Soviet computer technology before 1991 vs. the US-led Western powers

Source: Baidu Wenku (百度文库)   Edited by: 超级军网   Date: 2024/04/28 05:51:59


Based on facts, with data for reference.



The following is a summary of the main developments:

= * = * = * = * 1960 * = * = * = * =
[11] Control Data starts development of CDC 6600.

[12] Honeywell introduces Honeywell 800, with hardware support for timesharing between eight programs.

[13] E. V. Yevreinov at the Institute of Mathematics in Novosibirsk (IMN) begins work on tightly-coupled, coarse-grain parallel architectures with programmable interconnects.


= * = * = * = * 1966 * = * = * = * =
[24] Arthur Bernstein introduces Bernstein's Condition for statement independence (the foundation of subsequent work on data dependence analysis).
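
(A note not in the original timeline: to make the condition concrete, here is a minimal Python sketch of what Bernstein's Condition checks, using made-up read/write sets purely for illustration.)

```python
# Bernstein's Condition: statements S1 and S2 are independent (safe to reorder or run
# in parallel) iff W1 and R2 are disjoint, R1 and W2 are disjoint, and W1 and W2 are
# disjoint, where Ri / Wi are the sets of variables statement Si reads / writes.

def independent(r1, w1, r2, w2):
    return not (w1 & r2) and not (r1 & w2) and not (w1 & w2)

# S1: a = b + c   vs.   S2: d = b * 2   -> no conflicting accesses, independent
print(independent({"b", "c"}, {"a"}, {"b"}, {"d"}))   # True
# S1: a = b + c   vs.   S2: b = a - 1   -> flow and anti dependences, not independent
print(independent({"b", "c"}, {"a"}, {"a"}, {"b"}))   # False
```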

[25] CDC introduces CDC 6500, containing two CDC 6400 processors. Principal architect is Jim Thornton.

[26] The UNIVAC division of Sperry Rand Corporation delivers the first multiprocessor 1108. Each system contains up to 3 CPUs and 2 I/O controllers; its EXEC 8 operating system provides an interface for multithreaded program execution.

[27] Michael Flynn publishes a paper describing the architectural taxonomy which bears his name.

[28] The Minsk-222 is completed by E. V. Yevreinov at the Institute of Mathematics, Novosibirsk.


= * = * = * = * 1967 * = * = * = * =
[29] Karp, Miller and Winograd publish paper describing the use of dependence vectors and loop transformations to analyze data dependencies.

[30] IBM produces the 360/91 (later model 95) with dynamic instruction reordering. 20 of these are produced over the next several years; the line is eventually supplanted by the slower Model 85.

[31] BESM-6, developed at Institute of Precision Mechanics and Computer Technology (ITMVT) in Moscow, goes into production. Machine has 48-bit words, achieves 1 MIPS, and contains virtual memory and a pipelined processor.

[32] Gene Amdahl and Daniel Slotnick hold a published debate at the AFIPS Conference about the feasibility of parallel processing. Amdahl's argument about the limits to parallelism becomes known as "Amdahl's Law"; he also propounds a corollary about system balance (sometimes called "Amdahl's Other Law"), which states that a balanced machine has the same number of MIPS, Mbytes of memory, and Mbit/s of I/O bandwidth.
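
(Also not part of the original timeline: a small, hedged illustration of the law named in the entry above. Amdahl's Law bounds the speedup obtainable when only a fraction p of the work can be parallelized.)

```python
def amdahl_speedup(p, n):
    """Speedup on n processors when a fraction p of the work can run in parallel.
    speedup = 1 / ((1 - p) + p / n); as n grows it is capped at 1 / (1 - p)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, 1024 processors give under a 20x speedup:
print(round(amdahl_speedup(0.95, 1024), 1))   # 19.6
print(round(1.0 / (1.0 - 0.95), 1))           # 20.0, the asymptotic limit
```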


= * = * = * = * 1971 * = * = * = * =
[45] CDC delivers hardwired Cyberplus parallel radar image processing system to Rome Air Development Center, where it achieves 250 times the performance of a CDC 6600.

[46] Intel produces the world's first single-chip CPU, the 4004 microprocessor, with a capacity of 0.06 MIPS.


[47] Texas Instruments delivers the first Advanced Scientific Computer (also called the Advanced Seismic Computer), containing 4 pipelines with an 80 ns clock time. Vector instructions are memory-to-memory. Seven of these machines are later built, and an aggressive automatic vectorizing FORTRAN compiler is developed for them. It is the first machine to contain SECDED (Single Error Correction, Double Error Detection) memory.
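
(An aside to make the SECDED acronym concrete: the sketch below is a toy extended Hamming(8,4) code of my own, not TI's actual memory design. It corrects any single flipped bit and detects, but cannot correct, any two flipped bits.)

```python
# Toy SECDED (an extended Hamming(8,4) code on a 4-bit word, for illustration only).
# Codeword bit positions 1, 2, 4 hold Hamming parity, positions 3, 5, 6, 7 hold the
# data bits, and position 0 holds an overall parity bit across the whole codeword.

def encode(d):                      # d = [d0, d1, d2, d3], each 0 or 1
    c = [0] * 8
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]       # parity over positions whose index has bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]       # parity over positions whose index has bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]       # parity over positions whose index has bit 2 set
    c[0] = c[1] ^ c[2] ^ c[3] ^ c[4] ^ c[5] ^ c[6] ^ c[7]   # overall parity
    return c

def decode(c):
    syndrome = 0
    for pos in range(1, 8):         # XOR of the positions of all set bits; 0 if clean
        if c[pos]:
            syndrome ^= pos
    overall = 0
    for bit in c:
        overall ^= bit              # 0 for an even (error-free or double-error) word
    if syndrome == 0 and overall == 0:
        return "no error", c
    if overall == 1:                # odd number of flips: assume one, and fix it
        fixed = c[:]
        fixed[syndrome] ^= 1        # syndrome 0 here means the overall parity bit itself
        return "corrected single-bit error at position %d" % syndrome, fixed
    return "double-bit error detected (uncorrectable)", None

word = encode([1, 0, 1, 1])
one_flip = word[:];  one_flip[6] ^= 1                      # single-bit fault
two_flips = word[:]; two_flips[2] ^= 1; two_flips[5] ^= 1  # double-bit fault
print(decode(one_flip)[0])    # corrected single-bit error at position 6
print(decode(two_flips)[0])   # double-bit error detected (uncorrectable)
```

Real SECDED memories apply the same idea to wider words, for example 8 check bits protecting a 64-bit word.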

= * = * = * = * 1975 * = * = * = * =

[76] Design of the iAPX 432 symmetric multiprocessor begins at Intel.

= * = * = * = * 1976 * = * = * = * =
[77] The Parafrase compiler system is developed at University of Illinois under the direction of David Kuck. A successor to a program called the Analyzer, Parafrase is used as a testbed for the development of many new ideas on vectorization and program transformation.

[78] Carl Hewitt, at MIT, invents the Actors model, in which control structures are patterns of messages. This model is the basis for much later work on high-level parallel programming models.

[79] Floating Point Systems Inc. delivers its first 38-bit AP-120B array processor. The machine issues multiple pipelined instructions every cycle.

[80] Cray Research delivers the first Freon-cooled CRAY-1 to Los Alamos National Laboratory.

[81] Fujitsu delivers the first FACOM-230 vector processor to the Japanese National Aerospace Laboratory (NAL).

[82] Tandem ships its first NonStop fault-tolerant disjoint-memory machines with 2 to 16 custom processors, dual inter-processor buses, and a message-based operating system. The machines are used primarily for on-line transaction processing.

[83] Work on the PS-2000 multiprocessor begins at the Institute of Control Problems in Moscow (IPU) and the Scientific Research Institute of Control Computers in Severodonetsk, Ukraine (NIIUVM).

[84] Utpal Banerjee's thesis at the University of Illinois formalizes the concept of data dependence, and describes and implements the analysis algorithm named after him.

[85] CDC delivers the Flexible Processor, a programmable signal processing unit with a 48-bit instruction word.

[86] Burroughs delivers the PEPE associative processor.

[87] Floating Point Systems Inc. describes loop wrapping (later called software pipelining), which it uses to program pipelined multiple instruction issue processors.


= * = * = * = * 1980 * = * = * = * =
[114] PFC (Parallel FORTRAN Compiler) developed at Rice University under the direction of Ken Kennedy.

[115] Teradata spun off from Citibank to develop parallel database query processors.

[116] First PS-2000 multiprocessors go into operation in the USSR. Each contains 64 24-bit processing elements on a segmentable bus, with independent addressing in each PE. The machine's total performance is 200 MIPS. Approximately 200 are manufactured between 1981 and 1989.
[117] J. T. Schwartz publishes paper describing and analyzing the ultracomputer model, in which processors are connected by a shuffle/exchange network.

[118] Robin Milner, working at the University of Edinburgh, describes the Calculus of Communicating Systems, a theoretical framework for describing the properties of concurrent systems.

[119] David Padua and David Kuck at the University of Illinois develop the DOACROSS parallel construct to be used as a target in program transformation. The name DOACROSS is due to Robert Kuhn.

[120] DEC develops the KL10 symmetric multiprocessor. Up to three CPUs are supported, but one customer builds a five-CPU system.

[121] First El'brus-1, delivering 12 MIPS, passes testing in the USSR.

[122] Les Valiant describes and analyzes random routing, a method for reducing contention in message-routing networks. The technique is later incorporated into some machines, and is the basis for much work on PRAM emulation.

[123] Burroughs Scientific Processor project cancelled after one sale but before delivery.

= * = * = * = * 1986 * = * = * = * =
[206] Loral Instrumentation delivers the first LDF-100 dataflow computer to the U.S. government. Two more systems are shipped before the project is shut down.

[207] Gul Agha, at the University of Illinois, describes a new form of the Actors model which is the foundation for much later work on fine-grained multicomputer architectures and software.

[208] The CrOS III programming system, Cubix (a file-system handler) and Plotix (a graphics handler) are developed for the Caltech hypercubes. These are later the basis for several widely-used message-passing programming systems.

[209] Scientific Computer Systems delivers first SCS-40, a Cray-compatible minisupercomputer.

[210] First El'brus-2 completed in the USSR. The machine contains 10 processors, and delivers 125 MIPS (94 MFLOPS) peak performance. Approximately 200 machines are later manufactured.
[211] Thinking Machines ships first CM-1 Connection Machine, containing up to 65536 single-bit processors connected in hypercube.

[212] Encore ships its first bus-based Multimax computer, which couples NS32032 processors to Weitek floating-point accelerators.

[213] Dally shows that low-dimensional k-ary n-cubes are more wire-efficient than hypercubes for typical values of network bisection, message length, and module pinout. Dally demonstrates the torus routing chip, the first low-dimensional wormhole routing component.

[214] Kai Li describes system for emulating shared virtual memory on disjoint-memory hardware.

[215] The Universities of Bologna, Padua, Pisa, and Rome, along with CERN and INFN, complete a 4-node QCD machine delivering 250 MFLOPS peak and 60 MFLOPS sustained performance.

[216] CRAY X-MP with 4 processors achieves 713 MFLOPS (against a peak of 840) on 1000x1000 LINPACK.

[217] Alan Karp offers $100 prize to first person to demonstrate speedup of 200 or more on general purpose parallel processor. Benner, Gustafson, and Montry begin work to win it, and are later awarded the Gordon Bell Prize.

[218] Arvind, Nikhil, and Pingali at MIT propose the I-structure, a parallel array-like structure allowing side effects. This is incorporated into the Id language, and similar constructs soon appear in other high-level languages.

[219] Floating Point Systems introduces its T-series hypercube, which combines Weitek floating-point units with Inmos transputers. A 128-processor system is shipped to Los Alamos.

[220] Active Memory Technology spun off from ICL to develop DAP products.

[221] Kendall Square Research Corporation (KSR) is founded by Henry Burkhardt (a former Data General and Encore founder) and Steve Frank, to build multiprocessor computers.

[222] GE installs a prototype 10-processor programmable bit-slice systolic array called the Warp at CMU.

Back when the Soviet Union was still using BESM computers, in the early stage of the Cold War, its computer performance may well have been on a par with the West.


From the early 1980s, once integrated circuits became widespread, the Soviet Union began to fall behind, and it never recovered.

Note: the development history of Soviet computer systems:

BESM-1
BESM-1, originally referred to as simply the BESM or BESM AN ("BESM Akademii Nauk", BESM of the Academy of Sciences), was completed in 1952. Only one BESM-1 machine was built. The machine used approximately 5,000 vacuum tubes. At the time of completion, it was the fastest computer in Europe. The floating point numbers were represented as 39-bit words: 32 bits for the numeric part, 1 bit for sign, and 1 + 5 bits for the exponent. It was capable of representing numbers in the range 10^-9 to 10^10. BESM-1 had 1024 words of read/write memory using ferrite cores, and 1024 words of read-only memory based on semiconducting diodes. It also had external storage: 4 magnetic tape units of 30,000 words each, and fast magnetic drum storage with a capacity of 5120 words and an access rate of 800 words/second. The computer was capable of performing 8–10 KFlops. The energy consumption was approximately 30 kW, not accounting for the cooling systems.
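
(A quick back-of-the-envelope check of the quoted range, on my reading, which the excerpt does not state explicitly, that "1 + 5 bits for the exponent" means a sign bit plus a 5-bit magnitude, i.e. a binary exponent of roughly ±31.)

```python
# Field widths quoted above: 32-bit mantissa + 1 sign bit + (1 + 5)-bit exponent.
print(32 + 1 + 1 + 5)    # 39, matching the 39-bit word
# A binary exponent of at most +/-31 puts the largest and smallest magnitudes near:
print(2.0 ** 31)         # 2147483648.0, about 2.1e9  (order 10^9 to 10^10)
print(2.0 ** -31)        # about 4.66e-10             (order 10^-9 to 10^-10)
```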

BESM-2 also used vacuum tubes.

BESM-3M and BESM-4 were built using transistors. Their architecture was similar to that of the M-20 and M-220 series. The word size was 45 bits. 30 BESM-4 machines were built.

EPSILON (a macro language with high level features including strings and lists, developed by Andrey Ershov at Novosibirsk in 1967) was used to implement ALGOL 68 on the M-220. [1]

BESM-6
The BESM-6 was arguably the most well-known and influential model of the series designed at the Institute of Precision Mechanics and Computer Engineering. The design was completed in 1966. Production started in 1968 and continued for the following 20 years[2].

So we can see that in the 1960s Soviet computers were being built with transistors.

It was only with the Elbrus series of the 1980s that integrated circuits came into use.

Like its predecessors, the original BESM-6 was transistor-based (however, the version used in the 1980s as a component of the Elbrus supercomputer was built with integrated circuits). The machine's 48-bit processor ran at a 10 MHz clock speed and featured two instruction pipelines, separate for the control and arithmetic units, and a data cache of 16 48-bit words. The system achieved a performance of 1 MFlops. The fastest supercomputer at the time, the CDC 6600, achieved 3 MFlops using one central and ten peripheral processing units.

For 1968, that speed was pretty impressive.

The system memory was word-addressable using 15-bit addresses. The maximum addressable memory space was thus 32K words (192K bytes). A virtual memory system allowed this to be expanded to 128K words (768K bytes).
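
(The byte figures follow directly from the 48-bit, i.e. 6-byte, word size; a quick arithmetic check in Python:)

```python
WORD_BYTES = 48 // 8             # one BESM-6 word = 48 bits = 6 bytes
physical_words = 2 ** 15         # 15-bit word addresses: 32K words
virtual_words = 128 * 1024       # quoted virtual-memory limit: 128K words

print(physical_words * WORD_BYTES)   # 196608 bytes = 192K bytes
print(virtual_words * WORD_BYTES)    # 786432 bytes = 768K bytes
```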

(I cannot understand how the Sovremenny-class destroyers built in the 1980s ended up with a 512K memory the size of a wall. Baffling.)

The BESM-6 was widely used in the USSR in the 1970s for various computation and control tasks. A version named 5E26 was used in the control system of the S-300 air-defense missile complex. During the 1975 Apollo-Soyuz Test Project, the processing of Soyuz orbit parameters was accomplished by a BESM-6-based system in 1 minute. The same computation for the Apollo was carried out by the American side in 30 minutes.

On the question of the Apollo orbit computation:

A total of 355 of these machines were built. Production ended in 1987.

As the first Soviet computer with an installed base that was large for its time, the BESM-6 gathered a dedicated developer community. Over the years several operating systems and compilers for programming languages such as Fortran, Algol and Pascal were developed.[3]

Compilers for Fortran, Algol and Pascal were developed. That is quite impressive!

A modification of the BESM-6 based on integrated circuits, with 2-3 times the performance of the original machine, was produced in the 1980s under the name Elbrus-1K2 as a component of the Elbrus supercomputer.

The modified BESM-6 became the integrated-circuit Elbrus; design and manufacture of the integrated-circuit Elbrus CPU began in 1973.

The main models are as follows:

Elbrus is a series of Soviet supercomputer systems developed by the Lebedev Institute of Precision Mechanics and Computer Engineering (ITMiVT) since the 1970s. Since the 1990s, development has continued at MCST (Moscow Center of SPARC Technologies, ru:МЦСТ), a spin-off of ITMiVT.

Elbrus 1 (1973) was the first Soviet integrated-circuit computer,[citation needed] and the first fourth-generation Soviet computer, developed by Vsevolod Burtsev. It used a tag-based architecture and ALGOL as its system language, like the Burroughs large systems. It was used by the Defense Ministry. A side development was an update of the 1965 BESM-6 as the Elbrus-1K2.

Elbrus 2 (1977) was a 10-processor computer, considered the first Soviet supercomputer, with superscalar RISC processors. It was a re-implementation of the Elbrus 1 architecture in fast ECL chips. It was used in the space program, nuclear weapons research, and defense systems.

Elbrus 3 (1986) was a 16-processor computer developed by Boris Babaian. Differing completely from the architecture of both Elbrus 1 and Elbrus 2, it employed VLIW architecture.

Elbrus 2000 or E2K was a vaporware project to implement Elbrus 3 architecture as a microprocessor.

The current SPARC-like systems have been developed since 1996, starting with the Elbrus-90micro; the company was formed under an agreement with Sun Microsystems in 1997. In 1998 the company reported the development of an innovative EPIC processor, dubbed E2K, by a team under Boris Babaian.

Elbrus-3M: a single-processor computer used to test the new VLIW/EPIC (Very Long Instruction Word / Explicitly Parallel Instruction Computing) processor. This processor is based on the MCST/Elbrus E2K (or Elbrus 2000) architecture. The Elbrus processor (300 MHz, power consumption < 5 W) is fabricated with 0.13 micrometre technology. It has 75 million transistors and executes up to 23 instructions per clock cycle. Performance: 23.7 GIPS / 2.4 GFLOPS (64-bit), 4.8 GFLOPS (32-bit). The processor is manufactured in Taiwan.

Elbrus-3M1 is the latest MCST/Elbrus computer. It has two Elbrus processors and can work in parallel (over high-speed links) with other Elbrus computers, so the Elbrus-3M1 could be used to build supercomputers. According to test results, the peak performance of the Elbrus-3M1 ranges from 11.6 GFLOPS to 45.2 GFLOPS, depending on the data format.

Elbrus-3S will be the next MCST/Elbrus computer, projected for 2009. It will have four VLIW/EPIC Elbrus-S processors (500 MHz, 0.09 micrometre technology, system-on-a-chip).

The Elbrus-PF microprocessor, projected for 2011, is an 8-core VLIW/EPIC processor in 65 nm technology. With the transition to 45 nm technology, the processor is expected to reach a 2 GHz clock frequency and 8 TFLOPS of performance, and to be used to build supercomputers with PFLOPS-class performance.


The later design roadmap fits very well with Russia's semiconductor technology plans: importing 65 nm and 45 nm process technology.

There is a claim that the Russians believed large-scale integrated circuits could not survive an electromagnetic-pulse attack as well as vacuum tubes, so they never put much effort into developing LSI, which left their electronics falling further and further behind. Is there anything to this claim?
China's first electronic computer and first transistor were built with Soviet technical help.
So were the first picture tube (CRT) and the first fluorescent lamp tube.
I cannot understand how the Sovremenny-class destroyers built in the 1980s ended up with a 512K memory the size of a wall. Baffling.

Who told you the 512K was like a wall??!! That was the cooling equipment, not the memory itself, okay!!
1# JSTCVW09CD
Many of the performance figures listed here are of little reference value. For example:
The maximum addressable memory space was thus 32K words (192K bytes). A virtual memory system allowed this to be expanded to 128K words (768K bytes).
The maximum addressable space is 192K bytes and the virtual memory space can go up to 768K bytes, but that says nothing about how much memory was actually fitted. The 80386, for instance, can address 4 GB of memory, yet most 80386 machines sold came with only 2-4 MB.
Citing the 4004's performance also proves little, because it was a microprocessor. Processors of that era were built from many chips; the 4004's single-chip approach was unique, but performance was never its strong suit. Right up to the 1980s, the processors in Cray's big machines were not single chips.
Some of this is rumor piled on rumor, but it is true that Soviet computers lagged behind the US. An obvious example: the mainframe the USSR developed in the same period, the ES EVM (ЕС ЭВМ), was modeled on the IBM System/360. That model went into production in 1972, and its operating system, OS ES, was likewise a copy of IBM OS/360.

But there is no evidence that Soviet computers lagged behind the other Western countries, especially in special-purpose computers, where the product range was remarkably complete. That point matters.
Two examples:
Japan's computers are also strong, but every generation of the JMSDF's shipboard computer systems has been imported from the US, so on average they trail the equivalent US systems by about five years, and upgrades are at the mercy of the US; to this day they have no very-long-range strike capability, meaning they cannot launch Tomahawk-class ship-to-ship/cruise missiles. The Russians' shipboard systems, by contrast, could coordinate strikes by aircraft, satellites, and the fleet's own long-range missiles very early on, and later systems could even direct carrier-based aircraft.

Another example: when Britain developed the Nimrod AEW aircraft entirely on its own, a major reason the project ultimately failed was that the onboard processors and storage never met the requirements, and it had to be abandoned. The USSR was never blocked by problems of this kind when developing its AEW aircraft. (Incidentally, the British also ran into serious trouble upgrading the avionics of their imported Chinook helicopters by themselves, wasted eight years, and in the end still had to go begging to Boeing.)
zqyuan posted on 2009-8-29 22:32

I saw it said in an earlier thread that "the 1980s Sovremenny class had 512 KB of memory inside, built like a wall," and someone else even suspected it was "made with vacuum tubes."

At the time I was very doubtful and found it hard to believe.

They started using transistors in the 1960s and integrated circuits in 1973. How could it possibly be the size of a wall?
During the 1975 Apollo-Soyuz Test Project the processing of Soyuz orbit parameters was accomplished by a BESM-6 based system in 1 minute. The same computation for the Apollo was carried out by the American side in 30 minutes.

This seems to imply that among the practical supercomputers of the day the BESM-6 came out on top and beat the Americans.
8# JSTCVW09CD

Maybe it was core memory; 512K of magnetic-core memory really would look like a wall.
JSTCVW09CD posted on 2009-8-29 23:01
Apollo was in '69, a good deal earlier than '75, and a computer installed in a ground vehicle and one installed in a spacecraft are not comparable in size or weight.

Apollo was in '69, a good deal earlier than '75, and a computer installed in a ground vehicle and one installed in a spacecraft are not comparable in size or weight.
TripleX posted on 2009-8-29 23:08


Not the '69 one. This one:

It was the joint project of 1975. When the two sides computed the orbit together, the Russians finished in one minute while the Americans took 30 minutes. And it was put on record.

The Apollo-Soyuz Test Project (ASTP) (Russian: Экспериментальный полёт «Аполлон» — «Союз», Eksperimentalniy polyot Apollon-Soyuz) was the last mission in the Apollo program and the first joint flight of the U.S. and Soviet space programs. The mission took place in July 1975. For the United States of America, it was the last Apollo flight, as well as the last manned space launch until the flight of the first Space Shuttle in April 1981.


The only survivor now is the Elbrus processor and its computer systems.

The only survivor now is the Elbrus processor and its computer systems.
The Elbrus processor ought to be promoted properly; it could be made into a CPU compatible with general-purpose CPUs.
JSTCVW09CD posted on 2009-8-29 23:15
Not the '69 one. This one:

It was the joint project of 1975. When the two sides computed the orbit together, the Russians finished in one min ...
I have never heard this 1-minute / 30-minute story of yours.
And even if it did happen, it would not prove anything; the algorithms matter here. A small integrated circuit wins on power consumption and size, but an enormous vacuum-tube system is not necessarily slower than an integrated circuit.
Besides, the Western machines running at hundreds of millions of operations per second could not have been slower than the Russians', so there is probably something wrong with your example.
I looked up "supercomputer" on Baidu:

In the 1960s, supercomputers were designed by Seymour Cray at Control Data Corporation, which led the market until the 1970s, when Cray founded his own company, Cray Research. On the strength of his new designs he dominated the entire supercomputer market.
Since 1976, when the American Cray company introduced the world's first supercomputer capable of 250 million operations per second, the supercomputer, a standout showcase of a nation's scientific and technological strength and the darling of high technology, has shone in high-tech fields such as weather forecasting, genetic analysis in the life sciences, the nuclear industry, the military, and aerospace, winning the admiration of the world's technical elites. At present, every country is working toward supercomputers on the scale of 10^16 operations per second.
I can't make sense of it....
I heard a story that in the mid-1980s a group of Soviet computer specialists visited China on an academic exchange and got terribly excited at the sight of a VAX minicomputer; at the time they had no machine of that class at home.
I think the Russians went down the wrong road from the very beginning.
    Whenever a new architecture comes into use, the US promptly has bodies like the IEEE define a common electrical standard: on the one hand staying as compatible as possible with the old architecture, on the other letting peripherals expand around that standard bus. From the ISA bus to the PCI bus, it has always worked this way. Once the bus interface is fixed, CPU and memory vendors alike design around it, become part of the standard industrial bus, and use their own strengths to push the standard through successive upgrades.
   The Russians, by contrast, never had a complete, continuous bus standard, let alone industrial standards; every ministry and institute put up its own stage. It looked lively, but the returns were poor. China's strength is precisely that it started from a blank slate and dutifully learned from the Americans.
The Russians' supercomputer technology was actually quite strong; what happened after the collapse, though, is anyone's guess.
armfans posted on 2012-2-18 20:41
The Elbrus processor ought to be promoted properly; it could be made into a CPU compatible with general-purpose CPUs.
Digging up a grave several years old... just wait for the moderator's hand to twitch.
piginfly posted on 2012-2-18 22:59
I think the Russians went down the wrong road from the very beginning.
    Whenever a new architecture comes into use, the US promptly has bodies like the IEEE define a common electrical standard ...

This view is debatable. The Russian CPUs are independent architectures with fully owned intellectual property, whereas our own CPUs are constrained by MIPS: they are IP cores licensed from MIPS, can only follow in others' footsteps, and cannot break free;
but it is true that the Russian vendors never established a unified standard, and that was a big problem which seriously hindered market adoption and development.

armfans posted on 2012-2-19 10:06
This view is debatable. The Russian CPUs are independent architectures with fully owned intellectual property, whereas our own CPUs are constrained by MIPS ...

Let the experts explain to you the major differences between CPU hard-core licensing, soft-core licensing, and architecture licensing: http://bbs.lemote.com/viewthread.php?tid=70168&highlight=
I once saw the main control cabinet of a Tor-M1; call it mid-1980s Soviet level. Nearly a cubic metre in volume, no notion of a CPU, still at the stage of functional modules built out of gate logic (Soviet gate circuits were nothing like TTL, a style all their own, but all very bulky). The math processor, similar in function to an 8087, was two boards; the 512K of memory was four boards, each about the size of an industrial 1U board. The printed circuit boards were primitive, mostly hand-wired jumpers on the back, and there was no bus between boards. By now you should all know what level that is.
Heineken posted on 2012-2-19 10:38
I once saw the main control cabinet of a Tor-M1; call it mid-1980s Soviet level. Nearly a cubic metre in volume, no notion of a CPU, still at the stage of functional modules built out of gate lo ...

Just simulate nuclear-burst radiation on it and see how it holds up; then you'll know.

The radiation-hardening specs are right there for all to see.
JSTCVW09CD posted on 2012-2-19 10:46
Just simulate nuclear-burst radiation on it and see how it holds up; then you'll know.

The radiation-hardening specs are right there for all to see.
Does Russia have space-grade CPUs at present? How do they perform?
hswz posted on 2012-2-19 10:52
Does Russia have space-grade CPUs at present? How do they perform?
You could take a look at the newly refitted Soyuz spacecraft.

What it carries represents roughly the low-to-mid-range level of around the year 2000.
[attached image: 168578812.jpg]
JSTCVW09CD posted on 2009-8-29 23:52
The only survivor now is the Elbrus processor and its computer systems.
The hardware really did fall behind after the 1980s, but in recent years the Russians have been moving fairly fast in software; the famous Kaspersky is one example.
Supposedly seventy percent of Intel's engineers are Russian and Intel's internal working language is Russian. Just something I heard, mind you.
JSTCVW09CD posted on 2012-2-19 10:46
Just simulate nuclear-burst radiation on it and see how it holds up; then you'll know.

The radiation-hardening specs are right there for all to see.
Having done circuit design for ten years, I am on my knees begging to learn how Russian gate circuits, circuit boards, and cabinets resist nuclear radiation; I suddenly feel I have been transported into some parallel world where the Russians are invincible.
There's no walking out of this graveyard of dug-up threads....
Heineken posted on 2012-2-19 12:44
Having done circuit design for ten years, I am on my knees begging to learn how Russian gate circuits, circuit boards, and cabinets resist nuclear rad ...
Supposedly, circuits designed with all-out nuclear war in mind were all radiation-hardened. I don't know the specific details.

And I'm not from Almaz-Antey anyway.
xplayer posted on 2012-2-18 21:21
Since 1976, when the American Cray company introduced the world's first supercomputer capable of 250 million operations per second, the supercomputer, a standout showcase of a nation's scientific and technologic ...
They were comparing cosmic-ray-hardened spaceborne computers. By citing Cray, are you trying to show that NASA is ignorant, or that you are?
JSTCVW09CD posted on 2012-2-19 12:52
Supposedly, circuits designed with all-out nuclear war in mind were all radiation-hardened. I don't know the specific details.

And I'm not from Almaz-Antey ...
The more credible protection against nuclear radiation is a heavy-metal casing of sufficient thickness, lead for example. I can say with full responsibility that I never saw anything of the sort on the Tor or S-300 systems I have seen.
Heineken posted on 2012-2-19 10:38
I once saw the main control cabinet of a Tor-M1; call it mid-1980s Soviet level. Nearly a cubic metre in volume, no notion of a CPU, still at the stage of functional modules built out of gate lo ...
Put that way it sounds pretty dismal; and yet, remarkably, the Tor's performance seems quite good.
armfans posted on 2012-2-19 10:06
This view is debatable. The Russian CPUs are independent architectures with fully owned intellectual property, whereas our own CPUs are constrained by MIPS ...
  It is true that a computer is centered on its CPU, but a computer system is built on its bus. A bus is an open, transparent, non-exclusive industry standard. Vendors generally design their chips around the bus's structure and performance; a chip's operation is subordinated to the bus interface, rather than being a superb chip that stands apart from the bus.
   Apple in its day failed precisely because it insisted on doing everything itself, to the point where nobody else's hardware would fit an Apple machine, and Apple lost badly. IBM, by contrast, opened up its architecture and let everyone play; the result was that IBM stayed Big Blue, Intel, AMD, Texas Instruments, and Motorola all ate their fill, and bit players like NEC, VIA, and SiS got a sip of the soup.
   So the bus is critical. Military, industrial control, or home use, it is basically all additions and combinations around ISA and PCI. Put simply, standardization promotes division of labor and makes products more competitive, and technical progress in the products in turn drives the standard to keep upgrading, a virtuous cycle. The Russians never had any concept of standardization.
稻城枫叶 posted on 2012-2-19 11:19
The hardware really did fall behind after the 1980s, but in recent years the Russians have been moving fairly fast in software; the famous Kaspersky is one example.
Kaspersky is really nothing more than a somewhat fancy C application sitting on Windows kernel drivers. Of all the antivirus software I have used, it hogs the most memory and is the slowest. And if you spend enough time on dodgy adult sites, no antivirus is worth a damn anyway.
paini posted on 2012-2-19 13:33
They were comparing cosmic-ray-hardened spaceborne computers. By citing Cray, are you trying to show that NASA is ignorant, or that you are?
Then you, clever as a Martian, go find me the source for that 1-minute / 30-minute claim. Did you even read what I said?