Sunday, September 6, 2015

Laser-Zapping Experiment Simulates Beginnings of Life on Earth

Asterix laser

The origin of life on Earth about 4 billion years ago remains one of the biggest unsolved mysteries of science, but a new study is shedding light on the matter.

To recreate the conditions thought to exist on Earth when life began, scientists used a giant laser to ignite chemical reactions that converted a substance found on the early Earth into the molecular building blocks of DNA, the blueprint for life.

The findings not only offer support for theories of how life first formed, but could also aid in the search for signs of life elsewhere in the universe, the researchers said.

The beginning of life roughly coincides with a hypothesized event 4 billion to 3.85 billion years ago, known as the Late Heavy Bombardment, in which asteroids pummeled Earth and the solar system's other inner planets. These impacts may have provided the energy to jump-start the chemistry of life, scientists say.

In 1952, the chemists Stanley Miller and Harold Urey conducted a famous experiment at the University of Chicago in which they simulated the conditions thought to be present on early Earth. This experiment was intended to show how the basic materials for life could be produced from nonliving matter.

Recent studies suggest that asteroid impacts may break down formamide — a molecule thought to be present in early Earth's atmosphere — into genetic building blocks of DNA and its cousin RNA, called nucleobases.

In their new study, chemist Svatopluk Civiš, of the Academy of Sciences of the Czech Republic, and his colleagues used a high-powered laser to blast formamide into an ionized gas, or plasma, mimicking an asteroid strike on early Earth.

"We want[ed] to simulate the impact of some extraterrestrial body [during] an early stage of the atmosphere of Earth," Civiš said.

They used the Asterix iodine laser, a 490-foot-long (150 meters) machine that delivers about 1,000 joules of energy per pulse, an amount Civiš compared to the output of an atomic power station. The laser was switched on for only half a nanosecond, however, because that is comparable to the time frame of an asteroid impact, he said.
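Delivering that much energy in half a nanosecond implies an enormous peak power, as a quick back-of-the-envelope calculation (using the article's rounded figures) shows:

```python
# Back-of-the-envelope peak power of the laser pulse. Figures are the
# article's rounded values; the half-nanosecond duration is approximate.
pulse_energy_j = 1_000        # joules delivered per pulse
pulse_duration_s = 0.5e-9     # half a nanosecond

peak_power_w = pulse_energy_j / pulse_duration_s
print(f"Peak power: {peak_power_w:.1e} W")  # 2.0e+12 W, i.e. ~2 terawatts
```

The pulse reaches terawatt-scale power only because the energy is compressed into such a brief instant; the total energy delivered, 1,000 joules, is modest.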

The reaction produced scalding temperatures of up to 7,640 degrees Fahrenheit (4,230 degrees Celsius), sending out a shock wave and spewing intense ultraviolet and X-ray radiation. The chemical fireworks produced four of the nucleobases that collectively make up DNA and RNA: adenine, guanine, cytosine and uracil.

Using sensitive spectroscopic instruments, the researchers observed the intermediate products of the chemical reactions. These instruments measure the chemical fingerprint of the molecules formed during the course of a reaction. Afterward, the team used a mass spectrometer, a device that measures the masses of chemicals, to detect the final products of the reactions.

The breakdown of formamide produced two highly reactive chemical fragments, or "free radicals": CN (carbon-nitrogen) and NH (nitrogen-hydrogen). These could have reacted with formamide itself to produce the genetic nucleobases, the researchers said.

The findings, detailed today (Dec. 8) in the journal Proceedings of the National Academy of Sciences, provide a more detailed mechanism for how the basic chemistry of life got started.
The results of the study could offer clues about how to look for molecules that could give rise to life on other planets, the researchers said. The Late Heavy Bombardment could have driven similar reactions on other rocky planets in the solar system, but those worlds may have lacked water and other conditions necessary for life, Civiš said. Earth, for example, contained clay, which may have protected these building blocks of life from the very bombardment that created them.

"The emergence of terrestrial life is not the result of an accident but a direct consequence of the conditions on the primordial Earth and its surroundings," the scientists wrote in the study.

Wikipedia's Gender Problem Gets a Closer Look

Wikipedia

Wikipedia has a gender problem.

The online, crowdsourced encyclopedia is open to anyone who wants to edit it, but surveys suggest that nearly 90 percent of these volunteer "Wikipedians" are male. A 2011 editor survey by the Wikimedia Foundation pegged the number of active female editors at only 9 percent. Other surveys have found slightly different percentages, but none exceed about 15 percent female representation worldwide.

Now, researchers are delving into how that gender schism affects the content of Wikipedia, even as the Wikimedia Foundation and independent groups search for ways to get more women involved.

"This is something that people have lots of opinions about, but about which there is very little serious research," said Julia Adams, a sociologist at Yale University who is currently running a study on how academia is portrayed on Wikipedia compared with the actual structure and demographics of the academic world.

Sexism on the Web

Adams' work, which is supported by the National Science Foundation, has already come under fire. A blurb on the ongoing study appeared in Sen. Tom Coburn's (R-Okla.) 2014 "Wastebook," a publication put out by the senator's office that highlights what he believes to be wasteful government spending.

Coburn's focus on Adams' research highlights the challenges inherent in even talking about Wikipedia's gender gap. The "Wastebook" questions whether gender is an issue at all, citing a 2011 op-ed by a conservative writer. But a growing number of voices suggest that sexism is a problem not just on Wikipedia, but all over the Internet.

"Men want to shape the type of discussions that we want to have about technology, and then women's concerns become drowned out by the idea that it's not important," said Zuleyka Zevallos, a sociologist and head of Social Science Insights in Australia, who has written about Wikipedia and gender in the past.

Zevallos pointed to a current online controversy called Gamergate, which began when the ex-boyfriend of a video game developer claimed that she had a romantic relationship with a video game journalist. On Twitter and other sites, the conflict quickly turned complicated and ugly, with death and rape threats leveled at female game developers and journalists.

A similar thread of misogyny appeared after the European Space Agency's Philae probe made a historic landing on a comet on Nov. 12. In an interview during the agency's live broadcast, mission scientist Matt Taylor wore a shirt festooned with scantily clad women, drawing criticism from the scientific and science journalism communities. On Twitter, women who spoke out against the shirt were harassed and received tweets such as "please kill yourself" and "Why is it ugly women gripe about this stuff?"

Wiki women

Any woman with an Internet connection can sign up for a Wikipedia handle and begin editing. But the Wikipedian community arose from the open-source software community, which was heavily male, said Katherine Maher, chief communications officer for Wikimedia. 
"If you draw from a community that is predominately male from the get-go, you do ultimately end up shaping the community," Maher said.

In 2011, Sue Gardner, then the executive director of the Wikimedia Foundation, gathered women's reasons for not editing Wikipedia, and found that they ranged from discomfort with the interface to dislike of Wikipedia's conflict-heavy culture.

"There is an overly aggressive editing of women's pages," Zevallos said, referring to pages that deal with issues of interest to women. Even the Wikipedia page for the word "woman" itself has a history of controversial edits and far more conflict on its "talk" page, where editors discuss changes, than the Wikipedia article on the word "man." Debates range from arguments over bias and feminism to the appropriate weight for women pictured as representative illustrations in the article.

"Women just get tired," Zevallos said.

Why Wikipedia matters

The question of who edits Wikipedia has real implications for the sixth-most-visited website on the Internet. The article on friendship bracelets, for example, runs only 374 words, plus a list of pattern names. Click over to the article on the marble, a toy more traditionally associated with boys, and there are more than 2,000 words on marble history, design, manufacturing and games.

The disparity extends beyond childhood games. In 2013, writer Amanda Filipacchi noted in the New York Times that Wikipedia editors had begun removing female authors from the "American Novelists" category in the encyclopedia and putting them in a subcategory called "American Women Novelists." Male novelists got to stay on the gender-neutral list. (Since the article appeared, an "American Male Novelists" category has been created.)

There are efforts from the Wikimedia Foundation and from grassroots groups to get more women involved with editing (along with editors from other underrepresented groups, as the English-language Wikipedia is largely put together by white editors). One group of editors has set up a gender gap task force to improve pages on famous women and to create more resources for female editors. Wikimedia, as well as individual Wikipedians, has staged "edit-a-thons" geared toward women, such as one in February 2014 that was aimed at getting people to contribute to pages on art, women and feminism.

There is anecdotal evidence that such targeted programs can help. For example, Wikimedia's Maher said, a program in Egypt dubbed the Wikipedia Education Program aims to get students to contribute by translating English Wikipedia pages into Arabic.

"The participation there is almost 80 percent female," Maher said. Thus, some subcultures of Wikipedia reach large numbers of women.

Adams and her colleague, Hannah Brückner of New York University Abu Dhabi, are interested in examining how Wikipedia tackles academia. The goal, Adams told Live Science, is to understand how well Wikipedia portrays scientific research and the demographics of the researchers doing the work.

"Girls and women look at Wikipedia, as do boys and men, and this influences how people see, for example, whether they belong in the sciences or not," Adams said.

Initial results should be ready soon, with further information coming in throughout next year, Adams said.

"People have a lot of stake in public knowledge, whether it's a textbook for school or a public encyclopedia," Adams said. "And they should have a stake in it. That's part of the point of our project."

Virtual Reality Affects Brain's 'GPS Cells'

Rat in virtual reality

Virtual reality is a growing technology used in everything from video games to rehab clinics to the battlefield. But a new study in rats shows that the virtual world affects the brain differently than real-world environments, which could offer clues for how the technology could be used to restore navigating ability and memory in humans.

Researchers recorded rats' brain activity while the rodents ran on tiny treadmills in a virtual reality setup. In the virtual world, the animals' brains did not form a mental map of their surroundings like the ones they form in real-life settings, the study showed.

"We are using virtual reality more and more every day, whether for entertainment, military purposes or diagnosis of memory and learning disorders," said Mayank Mehta, a neuroscientist at the University of California, Los Angeles. "We are using it all the time, and we need to know … how does the brain react to virtual reality?"

Brain's GPS

Scientists have found that specialized brain cells act as a positioning system, creating a mental map of an environment from visual input as well as sounds, smells and other information. The discovery of these "GPS cells" earned the 2014 Nobel Prize in physiology or medicine.

Virtual reality creates an artificial environment, but does it activate a mental map the same way as the real world does? To find out, Mehta and his colleagues put rats on treadmills in a 2D virtual reality setup.

"We put a tiny tuxedo or harness around the rodent's chest," Mehta said — the rats are "swaddled like a baby, and a giant IMAX kind of screen goes all around them."
While the rats were exploring the virtual room, the researchers used tiny wires (50 times thinner than a human hair) to measure the response of hundreds of neurons in the animals' brains.

They recorded signals from a brain region called the hippocampus, known to be involved in learning and memory, while the animals explored the virtual room. Alzheimer's disease, stroke and schizophrenia all cause damage to the hippocampus, which interferes with people's ability to find their way in the world.

The researchers compared the brain activity in the virtual room to that measured while the animals explored a real, identical-looking room. When the rats were exploring the real room, their GPS neurons fired off in a pattern that produced a mental map of the environment. But to the researchers' surprise, when the rodents were exploring the virtual room, the same neurons fired seemingly at random — in other words, no mental map was being formed, Mehta said.

The researchers checked to see whether something was wrong with the rats or the measurements, but found nothing, Mehta said.

Mental pedometers

Yet, when the researchers took a closer look at the brain activity of the rats in virtual reality, they found that the signals weren't quite random. Instead, the brain cells were actually keeping track of how many steps the animals took — like a pedometer, Mehta said.
"We think the brain on its own behaves like a pedometer," but it turns that step count into a map of the space by using other cues, such as smells, sounds and memory, he said.

Mehta has a hunch that the way the brain makes a map of space is the same as the way it remembers anything. For example, if someone tells you to remember a random sequence of numbers, it would be very difficult. But if it were part of a song, you may remember it more easily.

"Our brain is very good at picking something up if it comes from different [senses]," Mehta said. So when the brain makes a map of space, in addition to visual information about the scene, it takes into account smells, sounds and other aspects of the environment, he said.

The current study was only in rats, but Mehta thinks human brains probably respond similarly to virtual reality. Previous studies have shown that people with hippocampus damage in virtual reality setups don't form clear mental maps. Before, scientists didn't know if the map was poor because of the participants' brain damage or because of the virtual environment, but the current findings support the latter, Mehta said.

Artificial Intelligence: Friendly or Frightening?

I, Robot

At the Royal Society in London, computer scientists, public figures and reporters have gathered to witness or take part in a decades-old challenge. Some of the participants are flesh and blood; others are silicon and binary. Thirty human judges sit down at computer terminals and begin chatting. The goal? To determine whether they're talking to a computer program or a real person.

The event, organized by the University of Reading, was a rendition of the so-called Turing test, proposed in 1950 by British mathematician and cryptographer Alan Turing as a way to assess whether a machine is capable of intelligent behavior indistinguishable from that of a human. The recently released film "The Imitation Game," about Turing's efforts to crack the German Enigma code during World War II, takes its title from the scientist's own name for his test.

In the London competition, one computerized conversation program, or chatbot, with the personality of a 13-year-old Ukrainian boy named Eugene Goostman, outperformed the other contestants, fooling 33 percent of the judges into thinking it was a human being. At the time, contest organizers and the media hailed the performance as a historic achievement, saying the chatbot was the first machine to "pass" the Turing test.

When people think of artificial intelligence (AI) — the study of the design of intelligent systems and machines — talking computers like Eugene Goostman often come to mind. But most AI researchers are focused less on producing clever conversationalists and more on developing intelligent systems that make people's lives easier — from software that can recognize objects and animals, to digital assistants that cater to, and even anticipate, their owners' needs and desires.

But several prominent thinkers, including the famed physicist Stephen Hawking and billionaire entrepreneur Elon Musk, warn that the development of AI should be cause for concern.

Thinking machines

The notion of intelligent automata, as friend or foe, dates back to ancient times.
"The idea of intelligence existing in some form that's not human seems to have a deep hold in the human psyche," said Don Perlis, a computer scientist who studies artificial intelligence at the University of Maryland, College Park.

Reports of people worshipping mythological human likenesses and building humanoid automatons date back to the days of ancient Greece and Egypt, Perlis told Live Science. AI has also featured prominently in pop culture, from the sentient computer HAL 9000 in Stanley Kubrick's "2001: A Space Odyssey" to Arnold Schwarzenegger's robot character in "The Terminator" films.

Since the field of AI was officially founded in the mid-1950s, people have been predicting the rise of conscious machines, Perlis said. Inventor and futurist Ray Kurzweil, recently hired to be a director of engineering at Google, refers to a point in time known as "the singularity," when machine intelligence exceeds human intelligence. Based on the exponential growth of technology according to Moore's Law (which states that computing processing power doubles approximately every two years), Kurzweil has predicted the singularity will occur by 2045.
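Kurzweil's 2045 estimate follows directly from that doubling rule. A minimal sketch of the arithmetic (the start year and the dimensionless "relative capacity" unit are illustrative):

```python
# Moore's-Law-style growth: processing power doubling every two years.
def doublings(start_year: int, end_year: int, period_years: float = 2.0) -> float:
    """Number of doublings between two years."""
    return (end_year - start_year) / period_years

# 30 years at one doubling per two years = 15 doublings.
growth = 2 ** doublings(2015, 2045)
print(f"Relative capacity by 2045: {growth:,.0f}x")  # 32,768x
```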

But cycles of hype and disappointment — the so-called "winters of AI" — have characterized the history of artificial intelligence, as grandiose predictions failed to come to fruition. The University of Reading Turing test is just the latest example: Many scientists dismissed the Eugene Goostman performance as a parlor trick; they said the chatbot had gamed the system by assuming the persona of a teenager who spoke English as a foreign language. (In fact, many researchers now believe it's time to develop an updated Turing test.)

Nevertheless, a number of prominent science and technology experts have expressed worry that humanity is not doing enough to prepare for the rise of artificial general intelligence, if and when it does occur. Earlier this week, Hawking issued a dire warning about the threat of AI.

"The development of full artificial intelligence could spell the end of the human race," Hawking told the BBC, in response to a question about his new voice recognition system, which uses artificial intelligence to predict intended words. (Hawking has a form of the neurological disease amyotrophic lateral sclerosis, ALS or Lou Gehrig's disease, and communicates using specialized speech software.)

And Hawking isn't alone. Musk told an audience at MIT that AI is humanity's "biggest existential threat." He also once tweeted, "We need to be super careful with AI. Potentially more dangerous than nukes."

In March, Musk, Facebook CEO Mark Zuckerberg and actor Ashton Kutcher jointly invested $40 million in the company Vicarious FPC, which aims to create a working artificial brain. At the time, Musk told CNBC that he'd like to "keep an eye on what's going on with artificial intelligence," adding, "I think there's potentially a dangerous outcome there."

But despite the fears of high-profile technology leaders, the rise of conscious machines — known as "strong AI" or "general artificial intelligence" — is likely a long way off, many researchers argue.

"I don't see any reason to think that as machines become more intelligent … which is not going to happen tomorrow — they would want to destroy us or do harm," said Charlie Ortiz, head of AI at the Burlington, Massachusetts-based software company Nuance Communications. "Lots of work needs to be done before computers are anywhere near that level," he said.

Terminator 3

Machines with benefits

Artificial intelligence is a broad and active area of research, but it's no longer the sole province of academics; increasingly, companies are incorporating AI into their products.
And there's one name that keeps cropping up in the field: Google. From smartphone assistants to driverless cars, the Bay Area-based tech giant is gearing up to be a major player in the future of artificial intelligence.

Google has been a pioneer in the use of machine learning — computer systems that can learn from data, as opposed to blindly following instructions. In particular, the company uses a set of machine-learning algorithms, collectively referred to as "deep learning," that allow a computer to do things such as recognize patterns from massive amounts of data.
For example, in June 2012, Google created a neural network running on 16,000 computer processors that trained itself to recognize cats by looking at millions of images from YouTube videos, The New York Times reported. (After all, what could be more uniquely human than watching cat videos?)
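In the machine-learning sense used here, "learning from data" means adjusting a model from labeled examples rather than hand-coding rules. A toy illustration of the idea (a single perceptron, nothing like Google's deep-learning system):

```python
# Toy "learning from data": a perceptron learns the AND function from
# labeled examples instead of being explicitly programmed with the rule.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights, initially knowing nothing
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                      # repeated passes over the data
    for (x1, x2), label in examples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred               # nudge weights toward the target
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
       for (x1, x2), _ in examples])     # prints [0, 0, 0, 1]
```

Deep learning stacks many layers of such units and trains them on far larger datasets, but the principle, fitting behavior to examples, is the same.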

The project, called Google Brain, was led by Andrew Ng, an artificial intelligence researcher at Stanford University who is now the chief scientist for the Chinese search engine Baidu, which is sometimes referred to as "China's Google."

Today, deep learning is a part of many products at Google and at Baidu, including speech recognition, Web search and advertising, Ng told Live Science in an email.

Current computers can already complete many tasks typically performed by humans. But possessing humanlike intelligence remains a long way off, Ng said. "I think we're still very far from the singularity. This isn't a subject that most AI researchers are working toward."
Gary Marcus, a cognitive psychologist at NYU who has written extensively about AI, agreed. "I don't think we're anywhere near human intelligence [for machines]," Marcus told Live Science. In terms of simulating human thinking, "we are still in the piecemeal era."
Instead, companies like Google focus on making technology more helpful and intuitive. And nowhere is this more evident than in the smartphone market.

Artificial intelligence in your pocket

In the 2013 movie "Her," actor Joaquin Phoenix's character falls in love with his smartphone operating system, "Samantha," a computer-based personal assistant who becomes sentient. The film is obviously a product of Hollywood, but experts say that the movie gets at least one thing right: Technology will take on increasingly personal roles in people's daily lives, and will learn human habits and predict people's needs.

Anyone with an iPhone is probably familiar with Apple's digital assistant Siri, first introduced as a feature on the iPhone 4S in October 2011. Siri can answer simple questions, conduct Web searches and perform other basic functions. Microsoft's equivalent is Cortana, a digital assistant available on Windows phones. And Google has the Google app, available for Android phones or iPhones, which bills itself as providing "the information you want, when you need it."

For example, Google Now can show traffic information during your daily commute, or give you shopping list reminders while you're at the store. You can ask the app questions, such as "should I wear a sweater tomorrow?" and it will give you the weather forecast. And, perhaps a bit creepily, you can ask it to "show me all my photos of dogs" (or "cats," "sunsets" or even a person's name), and the app will find photos that fit that description, even if you haven't labeled them as such.

Given how much personal data from users Google stores in the form of emails, search histories and cloud storage, the company's deep investments in artificial intelligence may seem disconcerting. For example, AI could make it easier for the company to deliver targeted advertising, which some users already find unpalatable. And AI-based image recognition software could make it harder for users to maintain anonymity online.
But the company, whose motto is "Don't be evil," claims it can address potential concerns about its work in AI by conducting research in the open and collaborating with other institutions, company spokesman Jason Freidenfelds told Live Science. In terms of privacy concerns, specifically, he said, "Google goes above and beyond to make sure your information is safe and secure," calling data security a "top priority."

While a phone that can learn your commute, answer your questions or recognize what a dog looks like may seem sophisticated, it still pales in comparison with a human being. In some areas, AI is no more advanced than a toddler. Yet, when asked, many AI researchers admit that the day when machines rival human intelligence will ultimately come. The question is, are people ready for it?

Taking AI seriously

In the 2014 film "Transcendence," actor Johnny Depp's character uploads his mind into a computer, but his hunger for power soon threatens the autonomy of his fellow humans. 

Hollywood isn't known for its scientific accuracy, but the film's themes don't fall on deaf ears. In April, when "Transcendence" was released, Hawking and fellow physicist Frank Wilczek, cosmologist Max Tegmark and computer scientist Stuart Russell published an op-ed in The Huffington Post warning of the dangers of AI.

"It's tempting to dismiss the notion of highly intelligent machines as mere science fiction," Hawking and others wrote in the article. "But this would be a mistake, and potentially our worst mistake ever."

Undoubtedly, AI could have many benefits, such as helping to eradicate war, disease and poverty, the scientists wrote. Creating intelligent machines would be one of the biggest achievements in human history, they wrote, but it "might also be [the] last." Considering that the singularity may be the best or worst thing to happen to humanity, not enough research is being devoted to understanding its impacts, they said.

As the scientists wrote, "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

Sunday, August 2, 2015

Biggest-Ever Telescope Approved for Construction

Artist’s Illustration of the European Extremely Large Telescope

The world's largest telescope has gotten its official construction go-ahead, keeping the enormous instrument on track to start observing the heavens in 2024.
The European Extremely Large Telescope (E-ELT), which will feature a light-collecting surface 128 feet (39 meters) wide, has been greenlit for construction atop Cerro Armazones in Chile's Atacama Desert, officials with the European Southern Observatory (ESO) announced Thursday.

"The decision taken by Council [ESO's chief governing body] means that the telescope can now be built, and that major industrial construction work for the E-ELT is now funded and can proceed according to plan," Tim de Zeeuw, ESO's director general, said in a statement. "There is already a lot of progress in Chile on the summit of Armazones, and the next few years will be very exciting."

E-ELT construction was first approved in June 2012, but on the condition that contracts worth more than 2 million euros ($2.48 million at current exchange rates) could be awarded only after 90 percent of the total funding required to build the telescope (1.083 billion euros, or $1.34 billion, at 2012 prices) had been secured. An exception was made for "civil works," including the leveling of the site and a road up Cerro Armazones, ESO officials said.

The 90-percent threshold was reached in October, when Poland agreed to join ESO, officials said, but making the numbers work took some tweaking. ESO split E-ELT development into two phases: 90 percent of the project's costs go toward "Phase 1," which will get E-ELT up and running, and 10 percent of the costs are allocated to "Phase 2," for the development of nonessential elements. These include about one-quarter of E-ELT's 798 individual mirror segments (which together make up the huge main mirror) and part of the telescope's adaptive optics system, which helps cancel out the blurring effects of Earth's atmosphere.
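The split described above works out roughly as follows (figures are the article's; the deferred-segment count is approximate, since "about one-quarter" is not an exact number):

```python
# Rough split of the E-ELT budget and mirror segments into Phases 1 and 2.
total_cost_eur = 1.083e9        # 2012-price construction cost
phase1_eur = 0.9 * total_cost_eur   # gets the telescope up and running
phase2_eur = 0.1 * total_cost_eur   # nonessential elements, funded later

deferred_segments = 798 // 4    # "about one-quarter" of the main mirror
print(f"Phase 1: {phase1_eur:.3e} EUR, Phase 2: {phase2_eur:.3e} EUR, "
      f"~{deferred_segments} segments deferred")
```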

The current construction approval applies only to Phase 1; contracts for this work will be awarded in late 2015. The Phase 2 components will be approved as more funding becomes available, ESO officials said.

"The funds that are now committed will allow the construction of a fully working E-ELT that will be the most powerful of all the extremely large telescope projects currently planned, with superior light-collecting area and instrumentation," de Zeeuw said. "It will allow the initial characterization of Earth-mass exoplanets, the study of the resolved stellar populations in nearby galaxies as well as ultra-sensitive observations of the deep universe."

As de Zeeuw said, E-ELT is not the only giant ground-based telescope in the works. The Giant Magellan Telescope (GMT) will soon start taking shape atop Las Campanas, another Chilean peak. GMT will arrange seven 27.6-foot-wide (8.4 m) primary mirrors into one light-collecting surface 80 feet (24 m) across; project officials are aiming for "first light" in 2021.

And the Thirty Meter Telescope (TMT) — which, not surprisingly, will boast a light-collecting surface 30 m, or 98 feet, wide — is slated to start observing from Hawaii's Mauna Kea in 2022. Like E-ELT, TMT's primary mirror will be composed of hundreds of relatively small segments.

All three megascopes should help researchers tackle some of the biggest questions in astronomy, including the nature of the mysterious dark matter and dark energy that make up most of the universe.

Cyberwarfare? New System Protects Drones from Hackers

Drone Cybersecurity

Military drones often carry sensitive data, ranging from troop movements to strategic operations, which can make them tempting targets for enemy interference. Now, a new system aims to protect these unmanned aerial vehicles from cyberattacks.

Researchers at the University of Virginia and the Georgia Institute of Technology developed the system and tested it in a series of live, in-flight cyberattack scenarios. As military and commercial drone use continues to grow, protecting against such attacks will become a priority, the scientists said.

When installed on a drone, the System-Aware Secure Sentinel system detects "illogical behaviors" compared to those expected of the vehicle, said project leader Barry Horowitz, a systems and information engineer at the University of Virginia in Charlottesville.

"Detections can serve to initiate automated recovery actions and to alert operators of the attack," Horowitz said in a statement.

In the demonstration, the researchers simulated various threats, including cyberattacks launched from enemies on the ground, attacks from military insiders and interference with supply chains. The "attacks" took place over the course of five days, and focused on interference in four different areas: GPS data, location data, information about imagery, and onboard surveillance/control of payloads.
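The article doesn't detail the Sentinel system's internals, but the general idea of flagging "illogical behaviors" can be sketched as a plausibility check on incoming data. The function name, speed limit and scenario below are purely illustrative, not the actual system:

```python
# Illustrative anomaly check (hypothetical, not the Sentinel system itself):
# a reported GPS position is "illogical" if reaching it would require the
# drone to move faster than it physically can.
def position_plausible(prev_pos, new_pos, dt_s, max_speed_mps=60.0):
    """Return True if the reported move is within the vehicle's capability."""
    dx = new_pos[0] - prev_pos[0]
    dy = new_pos[1] - prev_pos[1]
    distance_m = (dx * dx + dy * dy) ** 0.5
    return distance_m <= max_speed_mps * dt_s

print(position_plausible((0, 0), (50, 0), dt_s=1.0))    # True: 50 m in 1 s
print(position_plausible((0, 0), (5000, 0), dt_s=1.0))  # False: spoofed jump
```

A real system would cross-check many such signals (GPS, inertial sensors, imagery, payload state) and trigger recovery actions rather than just returning a flag.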

"The inflight testing gauged the effectiveness of the countermeasure technology in hardening the unmanned system's cyber agility and resiliency under attack conditions," the researchers said.

In each scenario, the cybersecurity system was able to rapidly detect cyberattacks, notify the team and correct the system's performance, the researchers said.

The research center that developed the technology is sponsored by the U.S. Department of Defense. The University of Virginia recently licensed the technology to the software company Mission Secure Inc., which is working to commercialize it for the military, intelligence and civil sectors.

Amazon's Robot 'Elves' Help Fill Cyber Monday Orders.

Amazon Warehouse Robots

On one of the busiest online shopping days of the year, thousands of bright-orange, pancake-shaped robots are buzzing around Amazon's shipping centers, rushing to fill the company's Cyber Monday orders.

Last year, Amazon CEO Jeff Bezos announced that he eventually plans to use drones to deliver packages to online shoppers, but while the Federal Aviation Administration crafts official regulations for the commercial use of drones, the online retail giant has found an intermediate step: flat, wheeled robots that zoom around Amazon's warehouses, carrying 7-foot-tall (2.1 meters) stacks of books, electronics and toys.  

The robots navigate on a grid system made of bar-code stickers stuck to the warehouse floor. The bots know which products to gather by scanning the bar codes as they roll along. The flat robots can slip under shelves full of products, lift them up and transport them back to employees, who then sort out the individual orders. The robots can lift shelves that weigh up to 750 lbs. (340 kilograms), according to the company's website.
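The navigation scheme described above — a floor grid of bar-code stickers that the robots scan as they roll along — can be illustrated with a toy path planner. This is not Kiva's actual software; the grid coordinates, bar-code format, and routing strategy are assumptions for the sketch.

```python
# Illustrative sketch (not Amazon/Kiva's real code): a robot on a floor
# grid of bar-code cells plans a simple Manhattan path to a shelf,
# "scanning" the bar code at each cell it crosses to confirm position.

def manhattan_path(start, goal):
    """Move along one axis, then the other, returning each grid cell visited."""
    x, y = start
    path = [(x, y)]
    while x != goal[0]:
        x += 1 if goal[0] > x else -1
        path.append((x, y))
    while y != goal[1]:
        y += 1 if goal[1] > y else -1
        path.append((x, y))
    return path

# Each cell's bar code encodes its own coordinates (hypothetical format).
def scan_barcode(cell):
    return f"CELL-{cell[0]:03d}-{cell[1]:03d}"

route = manhattan_path((0, 0), (2, 3))
scans = [scan_barcode(c) for c in route]
```

Because every cell reports its identity on scan, a robot never has to rely on dead reckoning alone to know where it is on the warehouse floor.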

While many shoppers rushed out to stores on Black Friday, some waited until Cyber Monday to take advantage of online deals. Cyber Monday is the biggest online shopping day of the year, which means companies like Amazon have a huge number of orders to pack and ship.

In order to uphold its reputation for fast deliveries, Amazon hired 80,000 seasonal workers in anticipation of Cyber Monday and the holiday shopping season, according to a report released by the company. Last year, Amazon sold about 426 items per second on Cyber Monday, and the online retailer expects to sell even more this year.

Robotic Arm in Amazon's Warehouse

Amazon bought the robot-building company Kiva Systems back in 2012 and now has about 15,000 of Kiva Systems' packing robots operating in its shipping centers. The robots are part of a larger packing and shipping system designed to improve efficiency. The system also includes huge robotic arms that can lift large bundles of products, and a sophisticated computer system for sorting items. This year, 10 of Amazon's 109 shipping centers are using robots to pick items and deliver them to employees for packing. 

Dave Clark, Amazon's senior vice president for operations, told the Associated Press that the robots will cut the Tracy, California, shipping center's operating cost by 20 percent. The robots aren't expected to cut any jobs — people are still needed to do more complex tasks, like packing the orders and searching for any damaged products, Clark told the Associated Press.

Amazon's next tech goals go beyond using robots to pack the orders — the company wants to use them for deliveries, too. The eventual goal of the program, called Prime Air, is to have drones drop off packages in customers' yards. However, the FAA has banned commercial drone use until official regulations are in place, which is expected to happen in 2015. The FAA would need to grant Amazon an exemption from these rules before the company can continue developing its drone delivery system.

NASA's 1st Deep-Space Capsule in 40 Years Ready for Launch Debut.

NASA's Orion capsule sits atop a United Launch Alliance Delta 4 Heavy rocket inside the Mobile Service Tower at Florida's Cape Canaveral Air Force Station ahead of its first test flight, which is scheduled to take place on Dec. 4, 2014.

A spaceship built to carry humans is about to venture into deep space for the first time in more than four decades.

NASA's Orion space capsule is scheduled to blast off on its first test flight Thursday. The unmanned mission, called Exploration Flight Test-1 (EFT-1), will send Orion zooming about 3,600 miles (5,800 kilometers) from Earth, before rocketing back to the planet at high speeds to test out the capsule's heat shield, avionics and a variety of other systems.

No human-spaceflight vehicle has traveled so far since 1972, when the last of NASA's Apollo moon missions came back to Earth. Indeed, in all that time, no craft designed to carry crews has made it beyond low-Earth orbit (LEO), just a few hundred miles from the planet.

If all goes according to plan, Orion will eventually fly farther than any Apollo capsule ever did, taking astronauts to near-Earth asteroids and — by the mid-2030s — the ultimate destination, Mars.

"I gotta tell you, this is special," Bob Cabana, director of NASA's Kennedy Space Center in Florida, said about EFT-1 during a press briefing last month. "This is our first step on that journey to Mars."

The challenges of deep space

Getting people safely to and from destinations in deep space poses challenges that the engineers of NASA's last crewed spaceship, the now-retired space shuttle, never had to consider. (No space shuttle ever traveled beyond Earth orbit.)

For example, if a problem develops aboard a spaceship in LEO, astronauts can theoretically be on the ground in less than an hour. But it would take days for a vehicle out by the moon or beyond to get home, said NASA Orion Program Manager Mark Geyer.

"So you've gotta have highly reliable systems, and you've gotta have capabilities to protect the crew in case of a contingency," he said.

One such capability will allow crewmembers aboard Orion to survive in their spacesuits for up to six days if the capsule gets depressurized, Geyer added.

"So if we have a totally depressed cabin, they can be in their suits and we can get them home," he said.

Deep-space vehicles are also exposed to higher radiation levels than vessels that stay in Earth orbit, where they are protected by the planet's magnetic field. So the shielding on Orion must be ample to safeguard the capsule's electronic equipment, Geyer said.

(Orion is designed to support astronauts for just 21 days at a time, so the need to protect crewmembers from radiation is not a big design driver. On longer missions — to Mars, for example — astronauts will spend most of their transit time in a deep-space habitat attached to Orion; the capsule's chief job is to get astronauts into space and back home again.)

Astronauts on deep-space missions will also return to Earth at much higher speeds than do crews that never venture beyond orbit.

"So the heat shield has to be different — different materials, different thicknesses," Geyer said. "And, actually, the physics of entry changes when you come back at those higher speeds."

The need to deal with those high re-entry speeds explains why Orion is a capsule, just like the spaceships that took astronauts to the moon and back during the Apollo program.

"The shape is the best shape for coming in from that high speed," said Mike Hawes, Orion program manager at the aerospace firm Lockheed Martin, which built the capsule for NASA.

Different than Apollo

But Orion is far from a carbon copy of the Apollo command module. For starters, it's bigger. Orion, which is designed to carry up to six astronauts, stands 10.8 feet tall (3.3 meters) and measures 16.5 feet (5 m) across the base. The three-person Apollo capsule was 10.6 feet tall by 12.8 feet wide (3.2 by 3.9 m). Orion contains 316 cubic feet (8.9 cubic m) of habitable volume, compared to 218 cubic feet (6.2 cubic m) for Apollo.
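A quick check of the figures above shows how much roomier the new capsule is: Orion's 316 cubic feet of habitable volume works out to roughly 45 percent more space than Apollo's 218 cubic feet.

```python
# Sanity check on the habitable-volume figures quoted above.
orion_ft3 = 316.0    # Orion habitable volume, cubic feet
apollo_ft3 = 218.0   # Apollo command module habitable volume, cubic feet

ratio = orion_ft3 / apollo_ft3
increase_pct = (ratio - 1.0) * 100.0  # roughly 45% more room than Apollo
```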

Technology has also advanced a great deal since the Apollo command module was put together.

"The Avcoat material, which we're using on the [Orion] heat shield, is similar to the Avcoat used on Apollo, although we have had to make some changes due to materials changes," Hawes said. "But the technology of just about everything else that we used to put in Orion and to build Orion have changed dramatically in that time.

"You think of 50 years of manufacturing changes — it's a totally different world," he added. "And in fact, we do have additive-manufactured [3D-printed] parts on Orion today."

The huge Saturn V rocket that blasted Apollo toward the moon was retired long ago, so Orion will rely on a different launch vehicle as well. EFT-1 will use a United Launch Alliance Delta 4 Heavy rocket, but future Orion missions will ride atop NASA's Space Launch System megarocket (SLS), which is currently in development.

SLS and Orion are scheduled to fly together for the first time in 2017 or 2018, on the capsule's second unmanned test flight; the duo's first manned mission should come in 2021.

3D Printing Can Improve Face Transplants.

A 3D printed model of a face.

Surgeons are using new, highly accurate 3D printers to guide face transplantation operations, making the procedures faster and improving outcomes, according to a new report.

The face replicas made on these printers take into account bone grafts, metal plates and the underlying bone structure of the skull. They improve surgical planning, which ultimately makes the surgery much shorter, the report authors said.

The new technique has already been used in several patients, including two high-profile face transplant patients — Carmen Tarleton, who was maimed by her husband and received a face transplant in 2013, and Dallas Wiens, who was the first person in the U.S. to receive a full face transplant, in 2011.

The surgeries have dramatically improved the lives of the patients, the researchers said.

"They went from having no face and no features at all, to being able to talk and eat and breathe properly," said Dr. Frank Rybicki, a radiologist and the director of the Applied Imaging Science Laboratory at Brigham and Women's Hospital in Boston, who presented the findings today (Dec. 1) at the meeting of the Radiological Society of North America.

Custom fit

For the patients, face transplantation is often the end of a long journey.

"Typically, by the time they come to us, they've had 20 or 30 surgeries already, just to save their lives," Rybicki said.

That means that patients may have plates, screws, bone grafts and dozens of other small modifications in their faces, and the new face has to fit perfectly around these. 3D printing allows the team to see exactly where these elements are, making the surgery — which can take up to 25 hours — go more quickly and smoothly, Rybicki said.

Soft tissue

The team printed out the soft tissue for Tarleton, whose estranged husband threw industrial-strength lye (a strong chemical used in soap making) on her face, according to the report.

The lye "literally burned off all the skin and all the squishy stuff in the face, and just left the bone," which was covered by a paper-thin flap of tissue, Rybicki said.

Printing soft tissue requires a sophisticated technique, but it was tremendously helpful because, without 3D printing, it's very difficult to visualize that tissue, Rybicki said.

Since her face transplantation procedure in 2013, Tarleton has done amazingly well, and her facial features have truly become her own, Rybicki said. The tissue has undergone dramatic remodeling, and the face now resembles neither her original face nor the donor's face. Nearly two years after her operation, it is hard to tell that she was the recipient of a face transplant, Rybicki said.

Images of Tarleton's face will be revealed at the meeting later today.

The team also created 3D-printed versions of the new soft-tissue structure at Tarleton's follow-up appointments. As a result, they can document some of the facial remodeling that Tarleton has undergone, Rybicki said.

New innovations

Having a better understanding of the facial anatomy can also improve outcomes in less dramatic types of facial reconstruction, said Dr. Edward Caterson, a plastic surgeon at Brigham and Women's Hospital who is part of the same face transplant team.

For example, when someone's jaw is destroyed, doctors typically harvest a piece of rib or leg bone to replace the missing jaw. Because the tibia, or leg bone, is quite straight, it's tricky to cut it for a perfect fit. 3D printing allows that cut to be done more precisely, Caterson said.

"We're also getting an opportunity to innovate surgically, due to the fact we can do this planning preoperatively," Caterson said.

Recently, 3D printing enabled Caterson to harvest bone from a completely new location — the femur, or thigh bone. Though doctors often use rib grafts to replace jawbone, ribs don't have their own blood supply, so they typically collapse after a few years.

3D modeling allowed Caterson to use a portion of the femur that has its own blood supply, which should last much longer, he said.

Invisible Dark Matter May Show Up in GPS Signals.

Astronomers estimate that the visible matter in Pandora's Cluster only makes up five percent of its mass. They believe the rest is made of dark matter.

GPS satellites are crucial for navigation, but now researchers think this technology could be used for an unexpected purpose: finding traces of enigmatic dark matter that is thought to lurk throughout the universe.

Physicists estimate there is nearly six times as much dark matter in the universe as there is visible matter. But despite a decades-long search, scientists have yet to find direct evidence of invisible dark matter, and its existence is inferred based on its gravitational pull on galaxies and other celestial bodies. Without the extra force of gravity from dark matter, researchers say, galaxies wouldn't be able to hold themselves together.

Physicists don't know what dark matter is made of, but some think it's composed of particles that barely interact with the visible world, which is why dark matter is invisible and has been difficult to detect.

However, Andrei Derevianko, a professor of physics at the University of Nevada, Reno, and Maxim Pospelov, a professor of physics and astronomy at the University of Victoria, British Columbia, have proposed that dark matter isn't made of particles at all. The researchers think dark matter may be a topological defect — a kind of tear in the fabric of space-time that can't be repaired. They think these patches of dark matter drifting by could interrupt GPS satellites and atomic clock systems.

To search for the theoretical patches of dark matter, the team is using GPS data from the Geodetic Lab in Reno, which pulls in data from more than 12,000 GPS stations around the world. In particular, the researchers are focusing on GPS satellites that use atomic clocks for navigation.

GPS satellites orbiting above Earth and their ground-based networks have synchronized clocks, and Derevianko and Pospelov think when clumps of dark matter drift by, they could cause interference between the two.

"The idea is, where the atomic clocks go out of synchronization, we would know that dark matter, the topological defect, has passed by," Derevianko said in a statement. "In fact, we envision using the GPS constellation as the largest human-built dark-matter detector."

It shouldn't take much to detect dark matter blowing by, the researchers said. It would only need to desynchronize the clocks by slightly more than a billionth of a second. The researchers also think these theoretical dark matter clumps travel at different speeds than other phenomena that could similarly desynchronize atomic clocks, such as solar flares. The different speeds would have different effects on the atomic clocks, the scientists said.
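The detection scheme described above can be sketched in miniature: a drifting defect sweeping past Earth would knock station clocks out of sync in sequence, so the signature would be offsets above roughly a nanosecond appearing across the network in a consistent time-ordered pattern. The station names, numbers, and threshold below are invented for illustration; this is not the researchers' actual analysis code.

```python
# Hypothetical sketch of the proposed detection idea: look for clock
# offsets above ~1 ns that arrive at different stations in sequence,
# as a passing topological defect would produce. All data is invented.

NANOSECOND = 1e-9

def find_sweep(readings, threshold=NANOSECOND):
    """readings: list of (station, arrival_time_s, clock_offset_s) tuples.
    Return stations whose offset exceeds the threshold, ordered by when
    the glitch reached them — a time-ordered sweep suggests a defect."""
    hits = [(t, name) for name, t, off in readings if abs(off) > threshold]
    return [name for t, name in sorted(hits)]

readings = [
    ("reno",  10.0, 1.5 * NANOSECOND),   # glitch arrives here first
    ("tokyo", 10.2, 1.4 * NANOSECOND),   # then sweeps onward
    ("oslo",  10.4, 1.6 * NANOSECOND),
    ("quito", 10.1, 0.2 * NANOSECOND),   # below threshold: ordinary noise
]
sweep = find_sweep(readings)
```

A solar flare would perturb many clocks at once rather than in a sweep, which is the kind of speed-based discrimination the researchers describe.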

Glenn Starkman, a professor of physics and astronomy at Case Western Reserve University in Cleveland, Ohio, who was not involved with the research, said it makes sense to first search for dark matter within the limits of the Standard Model, the reigning theory of particle physics that outlines how the universe should behave. This means looking for dark matter particles, not clumps, Starkman told Live Science. But, researchers working at underground particle detectors and the Large Hadron Collider (LHC), the world's largest atom smasher, where the once-elusive Higgs boson was discovered, have so far failed to find any dark matter particles.

An unusual idea like this one could help spark some alternative ideas for what makes up dark matter, said Dan Hooper, a researcher at the Fermi National Accelerator Lab in Illinois, who was also not involved with the study. That is, Hooper said, if physicists don't spot dark matter particles in the next couple of years.

New Artificial Intelligence Challenge Could Be the Next Turing Test.


A recently released biopic of Alan Turing ("The Imitation Game") tells the story of the British mathematician and cryptographer who built a machine to crack the German Enigma code during World War II. But Turing is perhaps best known for his pioneering work on artificial intelligence.

In 1950, Turing introduced a landmark test of artificial intelligence. In the so-called Turing test, a person engages in simultaneous conversations with both a human and a computer, and tries to determine which is which. If the computer can convince the person it is human, Turing would consider it artificially intelligent.

The Turing test has been a helpful gauge of progress in the field of artificial intelligence (AI), but it is more than 60 years old, and researchers are developing a successor that they say is better adapted to the field of AI today. 

The Winograd Schema Challenge consists of a set of multiple-choice questions that require common sense reasoning, which is easy for a human, but surprisingly difficult for a machine. The prize for the annual competition, sponsored by the Burlington, Massachusetts-based software company Nuance Communications, is $25,000.

"Really the only approach to measuring artificial intelligence is the idea of the Turing test," said Charlie Ortiz, senior principal manager of AI at Nuance. "But the problem is, it encourages the development of programs that can talk but don't necessarily understand."

The Turing test also encourages trickery, Ortiz told Live Science. Like politicians, instead of giving a direct answer, machines can change the subject or give a stock answer. "The Turing test is a good test for a future in politics," he said.

Earlier this year, a computer conversation program, or "chatbot," named Eugene Goostman was said to have passed the Turing test at a competition organized by the University of Reading, in England. But experts say the bot gamed the system by claiming to speak English as a second language, and by assuming the persona of a 13-year-old boy, who would dodge questions and give unpredictable answers.

In contrast to the Turing test, the Winograd Schema Challenge doesn't allow participants to change the subject or talk their way around questions — they must answer the questions asked. For example, a typical question might be, "Paul tried to call George on the phone, but he wasn't successful. Who was not successful?" The correct answer is Paul, but the response requires common sense reasoning.
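A Winograd schema like the one above is easy to represent as data; the hard part — resolving the pronoun — is what the challenge actually tests. The structure and scoring function below are an illustrative sketch, not the competition's official format.

```python
# Illustrative representation of a Winograd schema question (not the
# competition's official format). The example sentence is from the text.

schema = {
    "sentence": "Paul tried to call George on the phone, "
                "but he wasn't successful.",
    "question": "Who was not successful?",
    "choices": ["Paul", "George"],
    "answer": "Paul",  # resolving "he" takes common sense, not word statistics
}

def score(guesses, schemas):
    """Fraction of schemas answered correctly; chance is 50% with two choices."""
    correct = sum(1 for g, s in zip(guesses, schemas) if g == s["answer"])
    return correct / len(schemas)

accuracy = score(["Paul"], [schema])
```

Because each question has exactly two plausible referents, a program that merely guesses hovers near 50 percent, which is why consistent high accuracy would indicate genuine common-sense reasoning.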

"What this test tries to do is require the test taker to do some thinking to understand what's being said," Ortiz said, adding, "The winning program wouldn't be able to just guess."

Although the Winograd Schema Challenge has some advantages over the Turing test, it doesn't test every ability that a truly intelligent entity should possess. For example, Gary Marcus, a neuroscientist at New York University, has promoted the concept of a visual Turing test, in which a machine would watch videos and answer questions about them.