Has no one reposted this yet? IBM completes the design of its first brain-like chips

Source: Baidu Wenku    Editor: 超级军网    Time: 2024/04/28 23:53:35
IBM has unveiled an experimental chip that borrows tricks from brains to power a cognitive computer, a machine able to learn from and adapt to its environment.

Reactions to the computer giant’s press release about SyNAPSE, short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics, have ranged from conservative to zany. Some even claim it’s IBM’s attempt to recreate a cat brain from silicon.

“Each neuron in the brain is a processor and memory, and part of a social network, but that’s where the brain analogy ends. We’re not trying to simulate a brain,” said IBM spokeswoman Kelly Sims. “We’re looking to the brain to develop a system that can learn and make sense of environments on the fly.”

The human brain is a vast network of roughly 100 billion neurons sharing 100 trillion connections, called synapses. That complexity makes for more mysteries than answers — how consciousness arises, how memories are stored and why we sleep are all outstanding questions. But researchers have learned a lot about how neurons and their connections underpin the power, efficiency and adaptability of the brain.

To get a better understanding of SyNAPSE and how it borrows from organic neural networks, Wired.com spoke with project leader Dharmendra Modha of IBM Research.

Wired.com: Why do we want computers to learn and work like brains?

Dharmendra Modha: We see an increasing need for computers to be adaptable, to develop functionality today’s computers can’t. Today’s computers can carry out fast calculations. They’re left-brain computers, and are ill-suited for right-brain computation, like recognizing danger, the faces of friends and so on, that our brains do so effortlessly.

The analogy I like to use: You wouldn’t drive a car without half a brain, yet we have been using only one type of computer. It’s like we’re adding another member to the family.


Wired.com: So, you don’t view SyNAPSE as a replacement for modern computers?

Modha: I see each system as complementary. Modern computers are good at some things — they have been with us since ENIAC, and I think they will be with us for perpetuity — but they aren’t well-suited for learning.

A modern computer, in its elementary form, is a block of memory and a processor separated by a bus, a communication pathway. If you want to create brain-like computation, you need to emulate the states of neurons, synapses, and the interconnections amongst neurons in the memory, the axons. You have to fetch neural states from the memory, send them to the processor across the bus, update them, send them back and store them in the memory. It’s a cycle of store, fetch, update, store … and on and on.
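
Modha’s store, fetch, update, store loop is easy to picture in code. Below is a minimal Python sketch of that cycle, assuming a toy leaky integrate-and-fire neuron model; the sizes, constants and names are illustrative and are not taken from IBM’s simulator.

```python
# Minimal sketch of the store/fetch/update/store cycle described above:
# all neuron state lives in one memory block, and every timestep the
# processor pulls it across the "bus", updates it, and writes it back.
# The neuron model and all constants here are illustrative assumptions.
import numpy as np

N = 1024                                     # number of simulated neurons
memory = {                                   # the single block of memory
    "potential": np.zeros(N),                # membrane potential per neuron
    "weights": np.random.rand(N, N) * 0.01,  # synaptic strengths
}

def step(mem, inputs, leak=0.9, threshold=1.0):
    # fetch: neural state crosses the bus from memory to the processor
    v = mem["potential"]
    w = mem["weights"]
    # update: integrate inputs, apply leak, detect which neurons fire
    v = leak * v + w @ inputs
    fired = v >= threshold
    v[fired] = 0.0                           # reset neurons that spiked
    # store: push the updated state back across the bus into memory
    mem["potential"] = v
    return fired

# real-time behavior means running this cycle very fast, hence rising clock rates
for t in range(100):
    spikes = step(memory, (np.random.rand(N) < 0.05).astype(float))
```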

To deliver real-time and useful performance, you have to run this cycle very, very fast. And that leads to ever-increasing clock rates. ENIAC’s was about 100 kHz. In 1978 they were 4.7 MHz. Today’s processors are about 5 GHz. If you want faster and faster clock rates, you achieve that by building smaller and smaller devices.

Wired.com: And that’s where we run into trouble, right?

Modha: Exactly. There are two fundamental problems with this trajectory. The first is that, very soon, we will hit hard physical limits. Mother nature will stop us. Power is the next problem. As you shorten the distance between small elements, you leak current at exponentially higher rates. At some point the system isn’t useful.

So we’re saying, let’s go back a few million years, instead of just back to ENIAC. Neurons fire at about 10 Hz, on average. The brain doesn’t have ever-increasing clock rates. It’s a social network of neurons.

Wired.com: What do you mean by a social network?

Modha: The links between the neurons are synapses, and that’s the important thing — how is your network wired? Who are your friends, and how close are they? You can think of the brain as a massively, massively parallel distributed computation system.

Suppose that you would like to map this computation onto one of today’s computers. They’re ill-suited for this and inefficient, so we’re looking to the brain for a different approach. Let’s build something that looks like that, on a basic level, and see how well that performs. Build a massively, massively, massively parallel distributed substrate. And that means, like in the brain, bringing your memory extremely close to a processor.

It’s like an orange farm in Florida. The trees are the memory, and the oranges are bits. Each of us, we’re the neurons who consume and process them. Now, you could be collecting them and transporting them over long distances, but imagine having your own small, private orange grove. Now you don’t have to move that data over long distances to get it. And your neighbors are nearby with their orange trees. The whole paradigm is a huge sea of synapse-like memory elements. It’s an invisible layer of processing.
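
To contrast with the fetch/update/store sketch earlier, here is an equally toy Python illustration of the orange-grove idea: each neuron keeps its own small synapse memory and updates it in place, so no state has to travel across a shared bus. The class and its fields are invented for this example, not part of IBM’s design.

```python
# Toy illustration of memory co-located with processing: each "neuron"
# owns its local synapse table (its private orange grove) and integrates
# incoming spikes in place. Structure and names are assumptions for
# illustration only.
from dataclasses import dataclass, field

@dataclass
class Neuron:
    weights: dict = field(default_factory=dict)  # local synapse memory: pre-neuron id -> strength
    potential: float = 0.0

    def receive(self, pre_id: int, threshold: float = 1.0) -> bool:
        # integrate an incoming spike using only locally stored state
        self.potential += self.weights.get(pre_id, 0.0)
        if self.potential >= threshold:
            self.potential = 0.0
            return True                          # this neuron fires in turn
        return False

# a small "social network" of neurons: who talks to whom, and how strongly
a, b, c = Neuron(), Neuron(), Neuron()
b.weights[0] = 0.6    # neuron 0 (a) -> b
c.weights[0] = 0.3    # neuron 0 (a) -> c
c.weights[1] = 0.8    # neuron 1 (b) -> c
```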

Wired.com: In the brain, neural connections are plastic. They change with experience. How can something hard-wired do this?

Modha: The memory holds the synapse-like state, and it can be adapted in real time to encode correlations, associations and causality or anti-causality. There’s a saying out there, “neurons that fire together, wire together.” The firing of neurons can strengthen or weaken synapses locally. That’s how learning is effected.
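
As a rough illustration of that saying, a purely local Hebbian update can be written in a few lines of Python. The rule and constants below are generic textbook choices, not the plasticity mechanism actually implemented in SyNAPSE hardware.

```python
# "Neurons that fire together, wire together": strengthen a synapse when the
# pre- and post-synaptic neurons fire in the same window, let it decay otherwise.
# Learning rate, decay and bounds are illustrative assumptions.
def hebbian_update(weight, pre_fired, post_fired,
                   rate=0.01, decay=0.001, w_max=1.0):
    if pre_fired and post_fired:
        return weight + rate * (w_max - weight)   # fired together -> wire together
    return weight * (1.0 - decay)                 # otherwise, slowly weaken

w = 0.2
w = hebbian_update(w, pre_fired=True, post_fired=True)   # strengthened
w = hebbian_update(w, pre_fired=True, post_fired=False)  # slightly weakened
```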

Wired.com: So let’s suppose we have a scaled-up learning computer. How do you coax it to do something useful for you?

Modha: This is a platform of technology that is adaptable in ubiquitous, changing environments. Like the brain, there is almost a limitless array of applications. The brain can take information from sight, touch, sound, smell and other senses and integrate them into modalities. By modalities I mean events like speech, walking and so on.

Those modalities, the entire computation, go back to neural connections. Their strength, their location, who is and who is not talking to whom. It is possible to reconfigure some parts of this network for different purposes. Some things are universal to all organisms with a brain — the presence of an edge, textures, colors. Even before you’re born, before you learn anything, you can recognize them. They’re natural.

Knowing your mother’s face, through nurture, comes later. Imagine a hierarchy of programming techniques, a social network of chip neurons that talk and can be adapted and reconfigured to carry out tasks you desire. That’s where we’d like to end up with this.

http://www.wired.com/wiredscience/2011/08/ibm-synapse-cognitive-computer/
Chinese-language coverage (translated):

Big Blue IBM announced today that, in collaboration with four universities and the research agency DARPA, it has completed the basic design of a revolutionary computer chip meant to emulate the way the brain processes information, giving it capabilities such as perception, interaction and recognition.

Dharmendra Modha, principal investigator of the DARPA project and a researcher at IBM’s Almaden Research Center, put it this way: “It is the seed for a new generation of computers, one that combines supercomputing, neuroscience and nanotechnology.” If the brain-inspired chip can eventually be manufactured commercially, it would upend conventional computing, replacing it with artificial brains far better suited to thinking. The eventual applications could have an enormous impact on business, science and government.

Researchers have now completed the first phase of the project: designing a basic computing unit that can be replicated over and over to form the architecture of a brain-like computer. This new computing unit, or core, is modeled on the human brain. It not only computes information with “neurons”, its digital processors, but also has “synapses”, the substrate of learning and memory in the brain, as well as “axons”, the data pathways that tie the computing fabric together.

Although the concept sounds simple, the core operates very differently from most computers today. Modern computers follow the von Neumann architecture, in which memory and processor are separate and connected by a bus that serves as the data pathway. Over the past 65 years, von Neumann machines have evolved to run ever faster and to push ever more data across the bus at higher speeds. But because a computer’s speed is ultimately limited by the capacity of that bus, the result is the so-called “von Neumann bottleneck”.


The brain-inspired chip is different: the memory sits inside the chip itself. It is not fast, its signals run at only about 10 Hz, far slower than today’s 5 GHz computer processors. But within a brain-like parallel architecture it can do a great deal of work at once, sending signals in every direction so that the neurons operate simultaneously. Taken together, the brain’s roughly 100 billion neurons and 100 trillion connections (synapses) add up to enormous computing power.

IBM hopes to build entirely new chips modeled on this brain structure.

The research team has already built the first brain-like computing cores, each consisting of 256 neurons, a 256×256 array of synapses and 256 axons. In other words, each core already has the basic fabric of processing, memory and communication. The brain-like structure has a further advantage: it runs at low power, and parts of it can be shut down when not in use. These new chips will not be programmed in the traditional way. Cognitive computers built on them are expected to learn from experience, find correlations, form hypotheses, and remember and learn from the results. Because they mimic the brain’s “structural and synaptic plasticity”, processing is distributed and parallel rather than centralized and serial.
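
Taking the figures above at face value, a toy software model of one such core might look like the Python sketch below. The binary crossbar, the integrate-and-fire update and all constants are simplifying assumptions made for illustration; the article does not describe the chip’s actual circuits.

```python
# Toy model of one core: 256 axons (inputs) feed 256 neurons through a
# 256x256 synapse crossbar, with all state held on the core itself.
# Binary synapses and the update rule are illustrative simplifications.
import numpy as np

AXONS, NEURONS = 256, 256
rng = np.random.default_rng(0)

synapses = (rng.random((AXONS, NEURONS)) < 0.05).astype(float)  # sparse binary crossbar
weights = np.full(NEURONS, 0.25)       # per-neuron synaptic strength
potential = np.zeros(NEURONS)          # membrane potentials
THRESHOLD = 1.0

def tick(axon_spikes):
    """One timestep: incoming axon spikes drive every neuron in parallel."""
    global potential
    potential = potential + (axon_spikes @ synapses) * weights   # integrate
    fired = potential >= THRESHOLD
    potential[fired] = 0.0             # reset neurons that spiked
    return fired                       # output spikes, routed onward via axons

out = tick((rng.random(AXONS) < 0.1).astype(float))
```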

These chips can also recreate a “spiking” phenomenon like the one that occurs between neurons and synapses in the brain, which lets them handle surprisingly complex tasks, such as playing Pong. Two prototype chips have already been fabricated and are under test. The researchers are now moving into phase two: building a complete computer. The goal is a machine that not only analyzes complex information from multiple senses at once, but also dynamically modifies itself as it interacts with its environment and recognizes what is happening around it.


Beyond playing Pong, IBM’s team has tested the chip on navigation, machine vision, pattern recognition, associative memory and classification problems. Eventually, IBM will integrate the computing cores into a complete, integrated hardware and software system. Modha says IBM hopes to build a computer with 10 billion neurons and 100 trillion synapses, more than ten times as powerful as the human brain. Modha also predicts that the complete system would consume only about one kilowatt of power and occupy less than two liters of volume, roughly the size of our brains. By comparison, IBM’s fastest supercomputer today, Blue Gene, has 147,456 processors and more than 144 TB of memory, fills a huge air-conditioned cabinet, and consumes more than 2 megawatts of electricity.

As for concrete applications, IBM says a cognitive computer could monitor the world’s water supply via a network of sensors and tiny motors that constantly record and report data such as temperature, pressure, wave height, acoustics and tides, and could then issue tsunami warnings when an earthquake strikes. That is a task conventional computers simply could not take on.

The project was launched with $21 million in DARPA funding and involves six IBM labs, four universities (Cornell, the University of Wisconsin, the University of California and Columbia) and a number of government researchers. Although the project itself is new, IBM has been working toward brain-like computers ever since it built its first brain simulator, with 512 neurons, back in 1956.

Modha says: “If all goes well, this will not be a 5 percent leap, it will be a huge one. And so far we have already overcome the biggest difficulties we could have imagined.”
Looks pretty impressive.
That’s Big Blue for you.
Is simulating the human brain really that easy? I doubt it. The idea of a next-generation computer has been hyped for years, and we are still stuck at the stage of large-scale integrated circuits.
Is this the so-called quantum computer?