no future but what you make

skynet under development:

BBC wrote: IBM to build brain-like computers

IBM has announced it will lead a US government-funded collaboration to make electronic circuits that mimic brains.

Part of a field called "cognitive computing", the research will bring together neurobiologists, computer and materials scientists and psychologists.

As a first step in its research the project has been granted $4.9m (£3.27m) from US defence agency Darpa.

The resulting technology could be used for large-scale data analysis, decision making or even image recognition.

"The mind has an amazing ability to integrate ambiguous information across the senses, and it can effortlessly create the categories of time, space, object, and interrelationship from the sensory data," says Dharmendra Modha, the IBM scientist who is heading the collaboration.

"There are no computers that can even remotely approach the remarkable feats the mind performs," he said.

"The key idea of cognitive computing is to engineer mind-like intelligent machines by reverse engineering the structure, dynamics, function and behaviour of the brain."

'Perfect storm'

IBM will join five US universities in an ambitious effort to integrate what is known from real biological systems with the results of supercomputer simulations of neurons. The team will then aim to produce for the first time an electronic system that behaves as the simulations do.

The longer-term goal is to create a system with the level of complexity of a cat's brain.

Prof Modha says that the time is right for such a cross-disciplinary project because three disparate pursuits are coming together in what he calls a "perfect storm".

Neuroscientists working with simple animals have learned much about the inner workings of neurons and the synapses that connect them, resulting in "wiring diagrams" for simple brains.

Supercomputing, in turn, can simulate brains up to the complexity of small mammals, using the knowledge from the biological research. Modha led a team that last year used the BlueGene supercomputer to simulate a mouse's brain, comprising 55m neurons and some half a trillion synapses.

"But the real challenge is then to manifest what will be learned from future simulations into real electronic devices - nanotechnology," Prof Modha said.

Technology has only recently reached a stage in which structures can be produced that match the density of neurons and synapses from real brains - around 10 billion in each square centimetre.


Researchers have been using bits of computer code called neural networks that seek to represent connections of neurons. They can be programmed to solve a particular problem - behaviour that appears to be the same as learning.
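A neural network in this sense is just code: weighted connections that get adjusted until the output matches a target. As a rough illustration (my own minimal sketch, not code from the project described in the article), here is a single-layer perceptron learning the logical AND function:

```python
# Minimal perceptron: weighted "synapse" values are tuned until the
# output matches a target -- here, learning the logical AND function.
# Illustrative sketch only; not code from the IBM/DARPA project.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            w[0] += lr * err * x1    # strengthen/weaken each connection
            w[1] += lr * err * x2
            b += lr * err
    return w, b

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in and_samples])  # expect [0, 0, 0, 1]
```

This is exactly the "objective first, algorithm second" style Modha contrasts with below: the network exists only to solve the one problem it was trained on.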

But this approach is fundamentally different.

"The issue with neural networks and artificial intelligence is that they seek to engineer limited cognitive functionalities one at a time. They start with an objective and devise an algorithm to achieve it," Prof Modha says.

"We are attempting a 180 degree shift in perspective: seeking an algorithm first, problems second. We are investigating core micro- and macro-circuits of the brain that can be used for a wide variety of functionalities."

The problem is not in the organisation of existing neuron-like circuitry, however; the adaptability of brains lies in their ability to tune synapses, the connections between the neurons.

Synaptic connections form, break, and are strengthened or weakened depending on the signals that pass through them. Making a nano-scale material that can fit that description is one of the major goals of the project.

"The brain is much less a neural network than a synaptic network," Modha says.
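The activity-dependent tuning described above can be sketched in code. The following is an assumed toy model (not the project's actual plasticity rule): a synapse strengthens when the neurons on both ends fire together (a Hebbian-style update), decays when idle, and is pruned once its weight falls below a threshold.

```python
# Toy model of an adaptive synapse (illustrative assumption, not the
# project's actual model): the connection strengthens when both the
# pre- and post-synaptic neurons spike together, weakens when unused,
# and is pruned (set to 0.0) once its weight drops below a threshold.

def update_synapse(weight, pre_spike, post_spike,
                   lr=0.2, decay=0.05, prune_below=0.01):
    if pre_spike and post_spike:
        weight += lr * (1.0 - weight)   # strengthen, saturating at 1.0
    else:
        weight -= decay * weight        # unused connections weaken
    return weight if weight >= prune_below else 0.0

w = 0.5
for pre, post in [(1, 1), (1, 1), (0, 0), (1, 0)]:
    w = update_synapse(w, pre, post)
print(round(w, 3))  # ≈ 0.614: two correlated spikes, then gradual decay
```

A nano-scale material with this behavior, rather than the software loop above, is what the article says the project is after.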

First thought

The fundamental shift toward putting the problem-solving before the problem makes the potential applications for such devices practically limitless.

Free from the constraints of explicitly programmed function, computers could gather together disparate information, weigh it based on experience, form memory independently and arguably begin to solve problems in a way that has so far been the preserve of what we call "thinking".

"It's an interesting effort, and modelling computers after the human brain is promising," says Christian Keysers, director of the neuroimaging centre at University Medical Centre Groningen. However, he warns that the funding so far is likely to be inadequate for such a large-scale project.

That the effort requires the expertise of such a variety of disciplines means that the project is unprecedented in its scope, and Dr Modha admits that the goals are more than ambitious.

"We are going not just for a home run, but for a home run with the bases loaded," he says.

Re: no future but what you make

in semi-related skynet news...

NYTimes wrote: Scientists Worry Machines May Outsmart Man

A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.

Their concern is that further advances could create profound social disruptions and even have dangerous consequences.

As examples, the scientists pointed to a number of technologies as diverse as experimental medical systems that interact with patients to simulate empathy, and computer worms and viruses that defy extermination and could thus be said to have reached a “cockroach” stage of machine intelligence.

While the computer scientists agreed that we are a long way from Hal, the computer that took over the spaceship in “2001: A Space Odyssey,” they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.

The researchers — leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California — generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.

They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?

The researchers also discussed possible threats to human jobs, like self-driving cars, software-based personal assistants and service robots in the home. Just last month, a service robot developed by Willow Garage in Silicon Valley proved it could navigate the real world.

A report from the conference, which took place in private on Feb. 25, is to be issued later this year. Some attendees discussed the meeting for the first time with other scientists this month and in interviews.

The conference was organized by the Association for the Advancement of Artificial Intelligence, and in choosing Asilomar for the discussions, the group purposefully evoked a landmark event in the history of science. In 1975, the world’s leading biologists also met at Asilomar to discuss the new ability to reshape life by swapping genetic material among organisms. Concerned about possible biohazards and ethical questions, scientists had halted certain experiments. The conference led to guidelines for recombinant DNA research, enabling experimentation to continue.

The meeting on the future of artificial intelligence was organized by Eric Horvitz, a Microsoft researcher who is now president of the association.

Dr. Horvitz said he believed computer scientists must respond to the notions of superintelligent machines and artificial intelligence systems run amok.

The idea of an “intelligence explosion” in which smart machines would design even more intelligent machines was proposed by the mathematician I. J. Good in 1965. Later, in lectures and science fiction novels, the computer scientist Vernor Vinge popularized the notion of a moment when humans will create smarter-than-human machines, causing such rapid change that the “human era will be ended.” He called this shift the Singularity.

This vision, embraced in movies and literature, is seen as plausible and unnerving by some scientists like William Joy, co-founder of Sun Microsystems. Other technologists, notably Raymond Kurzweil, have extolled the coming of ultrasmart machines, saying they will offer huge advances in life extension and wealth creation.

“Something new has taken place in the past five to eight years,” Dr. Horvitz said. “Technologists are providing almost religious visions, and their ideas are resonating in some ways with the same idea of the Rapture.”

The Kurzweil version of technological utopia has captured imaginations in Silicon Valley. This summer an organization called the Singularity University began offering courses to prepare a “cadre” to shape the advances and help society cope with the ramifications.

“My sense was that sooner or later we would have to make some sort of statement or assessment, given the rising voice of the technorati and people very concerned about the rise of intelligent machines,” Dr. Horvitz said.

The A.A.A.I. report will try to assess the possibility of “the loss of human control of computer-based intelligences.” It will also grapple, Dr. Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. How would it be, for example, to relate to a machine that is as intelligent as your spouse?

Dr. Horvitz said the panel was looking for ways to guide research so that technology improved society rather than moved it toward a technological catastrophe. Some research might, for instance, be conducted in a high-security laboratory.

The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition becomes unshakable.

“If you wait too long and the sides become entrenched like with G.M.O.,” he said, referring to genetically modified foods, “then it is very difficult. It’s too complex, and people talk right past each other.”

Tom Mitchell, a professor of artificial intelligence and machine learning at Carnegie Mellon University, said the February meeting had changed his thinking. “I went in very optimistic about the future of A.I. and thinking that Bill Joy and Ray Kurzweil were far off in their predictions,” he said. But, he added, “The meeting made me want to be more outspoken about these issues and in particular be outspoken about the vast amounts of data collected about our personal lives.”

Despite his concerns, Dr. Horvitz said he was hopeful that artificial intelligence research would benefit humans, and perhaps even compensate for human failings. He recently demonstrated a voice-based system that he designed to ask patients about their symptoms and to respond with empathy. When a mother said her child was having diarrhea, the face on the screen said, “Oh no, sorry to hear that.”

A physician told him afterward that it was wonderful that the system responded to human emotion. “That’s a great idea,” Dr. Horvitz said he was told. “I have no time for that.”

Re: no future but what you make

Hplusmagazine wrote: Using Human “Wetware” to Control Robots

What happens when a man is merged with a computer or a robot? This is the question that Professor Kevin Warwick and his team at the department of Cybernetics, University of Reading in the UK have been trying to answer for a number of years.

There are many ways to look at this problem. There is the longer term prospect of freeing the mind from the limitations of the brain by uploading it in digital form, potentially onto a computer and/or robotic substrate (see the h+ interview with Dr. Bruce Katz, Will We Eventually Upload Our Minds?). There is also a shorter term prospect at a much more limited scale — a robot controlled by human brain cells could soon be wandering around Professor Warwick’s UK labs.

Professor Warwick (who incidentally has a device implanted in his left arm that enables his nervous system to be connected to a computer) and his colleague Ben Whalley from the School of Pharmacy recently created a robot that is controlled by cultured rat neurons. The next step in their research is to use a human neuron cell line, a type of “wetware.”

As reported in New Scientist, some 300,000 rat neurons grown in a nutrient broth and producing spikes of electrical activity were connected to the output of a small robot's distance sensors. The neurons proved capable of steering the robot around an enclosure. Here’s the New Scientist video of the robot courtesy of the University of Reading:

This research is the first step in examining how memories create neurological structures in the brain, and how the brain stores specific pieces of data. The researchers hope that this will lead to a better understanding of diseases and disorders that affect the brain such as Alzheimer's, Parkinson's, stroke, and brain injury.

Warwick comments, "This new research is tremendously exciting as firstly the biological brain controls its own moving robot body, and secondly it will enable us to investigate how the brain learns and memorizes its experiences. This research will move our understanding forward of how brains work, and could have a profound effect on many areas of science and medicine."

Warwick, Whalley, and colleagues don’t need specific ethical approval from the University or the UK to move forward with the human neuron cell line as soon as they are ready. The cultures are available on the open market and "the ethical side of sourcing is done by the company from whom they are purchased,” according to Whalley.

The use of the term “wetware” has been around since the mid-1950s. In the recent academic literature, it refers to cells (which are “wet”) built out of molecular circuits that perform logical operations, as electronic devices do, but with unique properties. Mathematician and science fiction writer Rudy Rucker used the term as the title of his 1988 cyberpunk novel, and later defined it in the book Mondo 2000: A User’s Guide to the New Edge (edited by some fellow named R.U. Sirius) as the “physical DNA in a cell.” In a 2007 blog entry, Rucker refers to physical DNA as “lower level” wetware, with higher-level wetware defined as “The arrangement of a body’s cells – and the all-important tangling of the cortical neurons…”

According to a University of Reading press release, the “wetware” biological brain used by the UK robot is made up of cultured neurons that are placed onto a multi-electrode array (MEA). The MEA is a dish with approximately 60 electrodes that pick up the electrical signals generated by the cells.

The biologically-generated signals drive the movement of the robot. Every time the robot nears an object, the electrodes generate signals to stimulate the brain. In response, the brain's output is used to drive the wheels of the robot left and right so that it avoids hitting objects. The robot has no additional control from a human or a computer – its sole means of control is from its own brain.

Dr. Whalley comments, "One of the fundamental questions that scientists are facing today is how we link the activity of individual neurons with the complex behaviors that we see in whole organisms. This project gives us a really unique opportunity to look at something which may exhibit complex behaviors, but still remain closely tied to the activity of individual neurons. Hopefully we can use that to go some of the way to answer some of these very fundamental questions."
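The closed loop described above (sensors stimulate the culture, the culture's spike output steers the wheels) can be sketched as follows. This is a hypothetical simulation: the real system stimulates cultured neurons through a roughly 60-electrode MEA, while here a random stub stands in for the culture, with the assumed behavior that stimulation on one side tends to produce more spikes on that side.

```python
# Hypothetical sketch of the sensor -> stimulate -> read spikes -> steer
# loop described above. A stub stands in for the cultured neurons; the
# real system reads electrical activity from a multi-electrode array.

import random

def culture_response(stimulus_left, stimulus_right):
    """Stand-in for the neuron culture: returns spike counts on the left
    and right output electrodes. Assumes (for illustration) that a
    stimulated side produces noticeably more spikes."""
    return (stimulus_left * 5 + random.randint(0, 2),
            stimulus_right * 5 + random.randint(0, 2))

def steer(distance_left, distance_right, threshold=10):
    # Stimulate the culture when an obstacle is near on either side.
    stim_l = 1 if distance_left < threshold else 0
    stim_r = 1 if distance_right < threshold else 0
    spikes_l, spikes_r = culture_response(stim_l, stim_r)
    # More spikes on the left electrodes -> turn right, and vice versa.
    if spikes_l > spikes_r:
        return "turn_right"
    if spikes_r > spikes_l:
        return "turn_left"
    return "forward"

print(steer(distance_left=4, distance_right=30))  # obstacle on the left -> turn_right
```

The point of the sketch is only the wiring: no program decides where to go; the culture's own response is the controller.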

While this isn’t exactly merging a man with a computer, it is merging some significant human carbon-based “wetware” (in Rucker’s 2007 definition of the term) with some sophisticated silicon-based circuitry in robotic form. Does this mean that whole brain implants into cyborg bodies are in our future?

oh good, a robot with a human brain that learns and thinks. what could possibly go wrong?


Re: no future but what you make

Cnet wrote: Be afraid: DARPA unveils Terminator-like Atlas robot

Atlas looks like the prototype for a future robot infantryman, and it can tackle rough terrain and carry human tools. Can you say "Skynet"?

If you're short of nightmare fuel, say hello to Atlas.

On Thursday, DARPA unveiled this hulking, 6-foot robot developed by Boston Dynamics, creator of the infamous BigDog and other scary creatures. Surprisingly, the 330-pound terror is designed to help us meatsacks.

Atlas is a testbed humanoid for disaster response, but it looks like it knows its way around a phased plasma rifle in the 40-watt range. Fortunately, it comes from Massachusetts, not the future.

We've seen hints of Atlas with Boston Dynamics' Petman soldier robot, which can do pushups and run on a treadmill.

Whereas that humanoid was designed to test chemical protection clothing, Atlas is altogether different. It's designed to not only walk and carry things, but can travel through rough terrain outdoors and climb using its hands and feet.

"Articulated, sensate hands will enable Atlas to use tools designed for human use," Boston Dynamics says. "Atlas includes 28 hydraulically actuated degrees of freedom, two hands, arms, legs, feet, and a torso."

Its head includes stereo cameras and a laser range finder. It's tethered to an off-board, electric power supply -- at least that's one weakness.

The DARPA Robotics Challenge is designed to help evolve machines that can cope with disasters and hazardous environments like nuclear power plant accidents.

The seven teams currently in the challenge will get their own Atlas bot and then program it until December, when trials will be held at the Homestead Miami Speedway in Florida.

They will be presented with tasks such as driving a utility vehicle, walking over uneven terrain, clearing debris, breaking through a wall, closing a valve, and connecting a fire hose.

Meanwhile, check out Atlas' other weakness in the vid below -- it's got an unstoppable desire to groove. As the English playwright William Congreve observed, music has charms to soothe the savage robot.

i wish they'd just change their name to massive dynamic and be done with it.