Sunday, January 31, 2010


Japan and the fifth generation of computers

Having talked about the GNU Project in the previous entry in this blog, today we are going to discuss a subject that once made me rather curious: the fifth generation of computers.

But as always, to understand this story it is necessary first to soak up a few concepts and a little history. A computer is, ultimately, just that: an information-processing machine, with the characteristic that it is programmable, and therefore versatile and powerful, although it certainly never crossed the minds of the fathers of modern computing that something like Facebook could ever exist.

However, computers have evolved over time, with a revolution every so often. While the first calculating machines date back centuries, and Charles Babbage even designed something conceptually similar to what a computer is, it was not until the 1940s that the first electronic computers were finally built, with examples such as the ENIAC (made in the USA, in the image) or the Mark I (UK).

In any case, this is not a post about computer paleontology, so let's focus a little on the subject. Basically, four main generations of computers are known and recognized.

The first generation of computers was based on vacuum tube technology. These were the first electronic computers: giant monsters weighing many tons that needed entire chilled rooms and had exorbitant power consumption. In this generation we can highlight machines like the UNIVAC (pictured) or the IBM 701.

The second generation of computers was based on transistors. With this new technology, which uses semiconductor materials such as silicon, computers could be more powerful and economical, with a smaller size and lower power consumption. In this second generation we can highlight machines such as the IBM 7090 or the Honeywell 800.

The third generation made use of integrated circuits as its major technological innovation over the previous generation. Again, this technology made the product cheaper and reduced both its size (there were computers that could fit into a closet) and its consumption. The number of computer manufacturers grew after the development of the minicomputer, much more limited than the large mainframes but something that allowed many companies and schools to have a computer of their own to work with. As for examples of machines that stand out in this generation, there are the IBM System/360 family, the CDC 6600 (as already mentioned, the first supercomputer in history) or the DEC PDP-1 (pictured).

The fourth generation is the generation of the microprocessor and of miniaturization. This is the generation we know best, because today's computers are basically an evolution of those early-70s models. After Intel's invention of the microprocessor with the Intel 4004 in 1971 (pictured), over the years the computer world underwent an explosion of colossal proportions in which computers became popular and finally reached the public: first as a curiosity, then as a toy, then as a working tool, and finally as part of a life centered on microprocessor-controlled machines. Machines we could highlight from this generation would be the MITS Altair 8800, the Apple II, the IBM PC or the Apple Macintosh, to name some major milestones.

However, if you look at all the computers and companies mentioned, they were all American. Of course there were manufacturers in other countries, such as Bull in France and Siemens in Germany, but the Anglo-Saxons dominated widely.

Until the 1970s, Japan had been little more than a "replicator" of British or American technology. However, following the huge success of its consumer microelectronics and automobile industries, the next target for the Japanese was clear: to lead the next revolution in computing. That is why the fifth generation computer project was created during the 1980s.

This project was funded by the Ministry of International Trade and Industry (MITI) and developed by the Japan Information Processing Development Center (JIPDEC), and the main idea behind these machines was to base them on technologies and techniques used in artificial intelligence.

Looking at Wikipedia, the main fields of research of the project were:
  • Technologies for knowledge processing.
  • Technologies for processing massive databases and knowledge bases.
  • High-performance workstations.
  • Distributed functional computing.
  • Supercomputers for scientific computing.
At that time, Japan was living a pretty sweet moment. It had already overtaken most Western industrialized countries, its growth had been the highest in the world since the end of World War II, and it had an aura of invincibility that made the other industrialized powers nervous. It is for this reason that many Western countries (the USA, the UK and some European countries) launched their own parallel projects, with principles similar to those of the Japanese fifth generation, to try to counter Japan's initiative.

However, after a very large amount of money invested and 11 years of development, Japan declared the project finished in 1993. The results, though, were not at all what had been expected. A series of technologies were designed, such as the SIMPOS operating system (later rewritten and renamed PIMOS), the KL1 programming language, and five Parallel Inference Machines (PIM), one of which can be seen in the photo.

But the problem with these machines is that, although interesting from a purely academic standpoint, they were not so from a practical one, since a machine with a general-purpose microprocessor could do the same things at a lower price and with even better performance, even in the field of artificial intelligence itself. Furthermore, possible improvements at the architectural level were generally very difficult to carry over to other systems, because we are talking about machines that do not even follow the von Neumann architecture.

That is why, although Japan never officially considered the project a failure, not much is said about its success either. However, as a friend of mine used to say, the one who fails is not the one who achieves no goals but the one who does not even try, so despite the apparent waste of resources and money by Japanese industry on a research project with so few positive results, it is always commendable and admirable for a country to decide to undertake this kind of project.

Saturday, January 16, 2010


RMS: the last of the true hackers

I had long wanted to talk about today's subject. Not surprisingly: in my college years I had some (good) teachers who were very committed to the free software movement and who ended up greatly influencing my view of the world of computing in general and of software in particular.

Anyone who knows the computing world a little has certainly heard of GNU. Perhaps you have never known exactly what it is, or you think GNU and Linux are the same thing, or perhaps you are an expert in free software philosophy and licensing and an ethical hacker. In any case, I do not intend to explain what GNU is, but rather what its roots are, who founded the project and why he did it.

The first question is easy to answer: Richard M. Stallman (henceforth RMS, as he himself likes to be called), whom you can see in the picture. GNU was founded in 1983 by RMS, but to understand the real reasons we must go back a long way.

Several universities in the USA are well known and recognized worldwide. Off the top of one's head come Berkeley and MIT. At this second school, as far back as the 1950s, there began to appear a generation of young people with great talent for computers and, generally, rather little for personal relationships. They were the first to be recognized as hackers: the first generation.

These young people shared a common set of values, even utopian and revolutionary ones, by which each person was measured by their competence with computers, by what they could do with them, and not by their sex, race or any other traditional yardstick. This group of young people evolved over time, just like the technology they worked with and the new generations of hackers who kept arriving.

One could say that the hacker philosophy of the time was best represented at the Artificial Intelligence Laboratory at MIT. RMS arrived at the laboratory in 1971 at the hand of Russ Noftsker, who hired him as a systems programmer, work he combined with his physics studies.

There, RMS soaked up the hacker ethic and became a particularly active member of the community. However, time kept moving, and the idyllic AI Lab was starting to stop being so idyllic.

Some of the changes may not seem particularly important. Until the 1970s, access to the laboratory's systems was free and without any bureaucratic hurdle. You just came in, sat down at a terminal, and had access to the same resources as everyone else. Exactly the same, including hardware, printers and even files and programs, since the concept of privacy did not exist in the laboratory. Anyone could see your files, anyone could copy your files, anyone could delete your files. But nobody did. It was a community that shared and worked for the common good, and the common good is not served by destroying the work of others.

However, although this way of thinking and working was accepted within the laboratory, outside it things were changing. MIT was in fact receiving warnings about the threat posed by the AI Lab machines connected to the ARPA network: anyone could walk up to one of those machines, get onto the network and, therefore, potentially reach military secrets.
So privacy arrived at the AI Lab in the form of user accounts with passwords. However, RMS fought it however he could. For example, when encrypted passwords were first deployed, he cracked them and sent every user a message telling them what their password was and suggesting it would be better to leave it blank: it is much easier to type, it is known to all, and it offers about the same security as the chosen key.

With the upgrade of the encryption system, RMS would have had a harder time cracking the keys, but he found that by tweaking the login program a little he could get it to greet each user with their own password whenever they authenticated, so in the end he got hold of the passwords anyway. Moreover, giving ample evidence of his disagreement with the whole subject of passwords, he decided that the emacs text editor could not be installed on machines that used a password system.

Why was RMS so against passwords? Again, one must understand the philosophy behind this movement. The MIT AI Lab was characterized as a cooperative world where everyone worked for the common good. Setting passwords and adding privacy made sharing knowledge harder. For RMS, as for the MIT hackers, sharing was the way things should be. If through your work you obtained results or information that someone else might need to produce or improve theirs, would it not be better for all that information to be available to everyone at all times? That is how the MIT laboratory worked. And to anyone who argued that this could not work, that anyone could sabotage your work or whatever, the laboratory itself was the living counterexample. But with privacy, security, bureaucracy... the utopia was breaking.

However, adding passwords to user accounts was far from the greatest threat to the hacker ethic that RMS loved. By the 1970s, two generations of hackers had already passed through the MIT laboratory, so to speak, and the first members of the third were arriving. The first generation was that of the 1950s, people who "grew up" with vacuum tube machines. The second generation arrived in the 1960s: the hackers of the time-sharing systems. These two generations (especially the first) were "moving on", taking up responsibilities such as a family, a job, a mortgage or rent to pay, and so on.

The third generation of hackers was different. The 1970s brought a different philosophy, new paradigms and, without the mentorship of the more experienced hackers, a new ethic. Just to get an idea, think of the titles of the books about RMS and Linus Torvalds, "Free as in Freedom" for the first and "Just for Fun" for the second, to grasp some of the differences Linus would show even years later. For the new generations, the concept of copyright was no longer an aberration or nonsense. In the laboratory it was no longer all sharing and working for the common good; and not only was private interest born in the laboratory in the form of a new company, but the lab itself became embroiled in a trade war between two of its own offspring.

This laboratory was the birthplace of the LISP programming language. One of the oldest programming languages in the world, LISP takes its name from LISt Processor, and given its characteristics, and considering where it was designed, it was regarded as the programming language of the field of artificial intelligence.

While writing a new implementation of LISP was not particularly complicated, writing a decent, efficient one was another story. Precisely for this reason, two of the lab's hackers decided to create companies to manufacture and sell LISP machines. First, in 1979, Richard Greenblatt founded LMI, an acronym for LISP Machines, Inc., following to some extent the ethics and values of the MIT hackers. Here you can see an image of a machine developed by this company.

Later, Russ Noftsker, the same man who had hired RMS at MIT in 1971, founded Symbolics. The main business of this company? Exactly: manufacturing and selling LISP machines. Both companies were therefore competitors, and both had dealings with MIT. And, of course, both drew on the largest pool of LISP experts that existed at the time, namely the MIT AI Lab.

Symbolics (you can see an example of its machines in the picture) had a more entrepreneurial approach, with more developed marketing and business practices that were less ethical, from the hacker standpoint, than LMI's; and yet, perhaps because of that, it had far more success than LMI in attracting hackers away from the laboratory. Because of that, Symbolics became for RMS the symbol of everything that was going wrong in his laboratory.

Since MIT had agreements with both companies, Symbolics was not keen on opening up its programs because, they claimed, that could mean working for the competition. So they stopped providing the source code of their programs.

Although RMS did not work for either company, he did not like what Symbolics was doing, so he decided to act: every time he got the binary of a program, he compared it with the previous version and, through reverse engineering, worked out what the new program did, reimplemented it, and passed it on to LMI. It is possible, as the English Wikipedia suggests, that the reason was that he did not want either company to gain enough of an advantage to hold a monopoly on LISP machines. Or perhaps, as Steven Levy says in his book Hackers, it was a hacker's way of punishing Symbolics for its unethical practices.

In any case, in 1982 he decided that he could not keep up that way of life, disassembling and reimplementing programs at night and studying for his physics doctorate in the morning. He gave himself a deadline: one year. Finally, 1983 arrived and with it the moment to rethink his life and future. It was then that he announced the GNU project, a completely free system where the community works for the community, where no red tape is imposed and where the rights of no individual, however powerful, prevail over the common good. Many are the achievements still to be told of both RMS and the GNU project. But that's another story, to be told another time...

Saturday, January 9, 2010


The Ferrari of computers

In the last entry in this blog, when talking about the CELL processor, I referred to it as a supercomputer on a chip. This comparison is not gratuitous, and it does not refer to how fast the microprocessor runs (understanding "faster" as more megahertz), but to the chip architecture itself, as the CELL shares a defining characteristic of supercomputers: it is a vector processor.

That is when I began to remember old stories of ancient, forgotten heroes, and I thought of a hardware designer whose name has always been synonymous with raw power. I refer, of course, to Seymour Cray.

Telling Seymour's whole story would take too many posts, even an entire book (in fact, one already exists, although not exclusively dedicated to him), but I am going to focus on one specific project, the one that crowned him once again as the designer of the fastest computer in the world: the Cray-1.

To speak of the Cray-1 we first have to cover a little background. In the 1960s, IBM was the almighty lord of the computer world, and everyone else, however great they were, danced only to the tune Big Blue played. However, one company decided to dispute IBM's leadership in the field of scientific computers, that is, computers specializing in complex mathematical operations on relatively limited sets of data.

Thus appeared first the CDC 1604 and then the jewel in the crown, the CDC 6600, considered the first supercomputer in history and at least an order of magnitude faster than any other computer of its time. Famous is the comment of Thomas J. Watson Jr., then president of IBM, asking how it was possible for a small company of 34 people to beat them when IBM had thousands. And just as famous is the answer of the chief engineer who designed the CDC 6600: that Watson had answered his own question. The name of that engineer was Seymour Cray.

To counter the CDC 6600, IBM announced a new model of the System/360 mainframe family, the Model 92, which promised to be at least as fast as the CDC 6600, but with everything that being an IBM computer implied. This made many customers think twice before buying a CDC 6600 and wait to see what IBM would come up with to compete. But time went by and the Model 92 never hit the market as promised, so Control Data sued IBM over a clear case of what today is known as vaporware.

Against all odds, CDC beat IBM in court, but perhaps this success went a little to their heads, and the company began to change course: from being a company that manufactured and sold computers to one that manufactured and sold solutions. That is, from then on, designing and building new computers was only part of a business that was rounded out with printers, terminals, input and output systems, software, etc.

Unfortunately for CDC, this meant that the company had to split its always limited resources among more and more projects. To top it off, in 1969 it was funding not only the development of the CDC 8600, which was to be a multiprocessor computer (remember, we are talking about 1969, when such things barely existed outside the realm of theory), but also the STAR-100, the design of a long-time collaborator and assistant of Cray's.

So in that same year, 1969, CDC management met with Seymour Cray to ask him to cut his project's costs by 10% (that is, to fire 10% of his workforce). Instead, what he did was reduce his own salary to the minimum required by law ($1.25 per hour) to save the project.

But his sacrifice was in vain. The CDC 8600 was such a complex machine that the problems kept piling up, and perhaps the largest of them was cooling. Cray wanted more funds to start the project over from scratch, as had been done with the CDC 6600 with such good results. However, CDC management saw this move as too risky and did not authorize it, which led Seymour Cray to leave Control Data Corporation and found his own company, which would end up being called Cray Research.

Once he had sufficient funds (some of which were obtained through Tupperware-style talks and demonstrations in private homes), Seymour founded the company and set to work again on the construction of a new supercomputer.

Instead of following the multiprocessor line of development of the CDC 8600, Seymour decided to stick with the traditional idea of a single CPU for the first project at his new venture. So instead of having multiple processors working independently of one another, he decided to build a vector processor, which means, simplifying a little, that you have a set or vector (hence the name) of math units that can be loaded with different data and all execute the same instruction at once. While this technique is of little use in the world of "office" mainframes, for math-intensive scientific applications (or image and video processing, or three-dimensional video games), where the same operations are applied to large amounts of data, it greatly accelerates performance.
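To picture the idea, here is a toy sketch in Python. It is purely conceptual: the 4-lane vector width is a made-up figure for illustration and has nothing to do with the Cray-1's actual design or instruction set. The point is only the difference in how many instructions get issued: the scalar model issues one add per element, while the vector model loads whole groups of operands into lanes and executes a single add across all of them.

```python
# Toy illustration of scalar vs. vector (SIMD) execution.
# LANES is a hypothetical vector width, not the Cray-1's real one.
LANES = 4

def scalar_add(a, b):
    """Scalar model: one 'add' instruction issued per element pair."""
    instructions = 0
    result = []
    for x, y in zip(a, b):
        result.append(x + y)  # one instruction per element
        instructions += 1
    return result, instructions

def vector_add(a, b, lanes=LANES):
    """Vector model: lanes are loaded with different data and a single
    'add' instruction operates on all of them simultaneously."""
    instructions = 0
    result = []
    for i in range(0, len(a), lanes):
        # one vector instruction covers up to `lanes` element pairs
        result.extend(x + y for x, y in zip(a[i:i + lanes], b[i:i + lanes]))
        instructions += 1
    return result, instructions

a = [1, 2, 3, 4, 5, 6, 7, 8]
b = [10, 20, 30, 40, 50, 60, 70, 80]
print(scalar_add(a, b))  # 8 "instructions" for 8 elements
print(vector_add(a, b))  # 2 "vector instructions" for 8 elements
```

Same result either way; the vector machine simply gets there in a quarter of the instructions, which is exactly why this model shines when a small set of operations is repeated over huge arrays of numbers.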

However, the idea of the vector processor was not really his; it was inspired by his competitor within CDC. Indeed, the CDC STAR-100 was designed as a vector computer. The differences between the two machines, though, were palpable. Since I do not pretend to give lessons in processor architecture (I don't think I have enough knowledge to do so), let's just say that although Seymour borrowed the idea, he developed it in his own style, with his own techniques and conclusions, so that the final machine differed substantially both in structure and in performance.

Finally, in 1975, the Cray-1 went on sale. The original model had a 64-bit processor running at 80 MHz and could address the equivalent of 8 megabytes of RAM. At a glance, however, what stood out most was its design: a horseshoe shape that gave it an extremely advanced, science-fiction look. The cooling system, the true workhorse of any supercomputer, was based on Freon gas, something completely new at the time. With all this, the first Cray-1 model weighed the negligible amount of 5.5 tons.

Initially, Cray Research expected to sell about a dozen supercomputers (consider the very specific market these machines have), but over several years purchase requests for the Cray-1 piled up, reaching more than 80 computers sold, at a price of $8.8 million each in the second half of the 1970s.

This success catapulted Seymour Cray to fame, though he was already working on the Cray-2 and did not have much time for "social" or promotional events around the Cray-1. But there is one story worth telling to understand how much respect Seymour Cray commanded.

As little as he liked appearing in public, Cray gave a talk in 1976 to the developers of the National Center for Atmospheric Research in Colorado, USA. When question time came, the room fell into complete silence. Seymour waited several minutes for someone to ask a question, but nobody said anything. Finally, after he left, the head of the center's computing division, quite angry given how hard it had been to get Cray there, asked the audience how it was possible that no one had raised a hand. After a tense moment, one participant replied: "How do you talk to God?". Whatever the later course of events, and whatever Seymour Cray's successes and failures, this story reflects very well the genius of an engineer whose dream was always to build the fastest computer in the world: the Ferrari of computers...