
Multiprocessor Systems

Multiprocessor systems are not a new concept, nor are they a new design; they have been around for many years. StarOS, for example, was in use at Carnegie-Mellon University on their 50-processor Cm* multi-microprocessor computer as early as 1979.1 The Sega Saturn had two 32-bit processors, a new and very challenging concept for the game developers of the time, many of whom essentially stopped developing anything for the Sega Saturn and went to develop for the PlayStation instead. The same problem arose with the Atari Jaguar, which had three different chips containing five processors.2 It was simply too complicated to program for the multiprocessor consoles.3 The fact that developers were avoiding the multiprocessor consoles was probably one reason it took around 26 years for the multiprocessor computer to become mainstream.

It has taken some time, but now that these systems are mainstream it is interesting to look at how the back end of the system actually works, or, more accurately, how the operating system handles the fact that it now has two or more processors to deal with as opposed to the (up to this point) traditional one. Since multiprocessing is now the norm, almost every operating system out there will automatically support at least two processors; Mac OS X 10.4, for example, currently supports eight separate processors.

There are three different ways for a computer to handle multiprocessing. One of the most basic is to allow each processor to have its own operating system. This also means that each CPU has its own private memory. In other words, if you have n CPUs you will be running n operating systems. Since each CPU has its own memory and operating system, there is no way for a CPU that is not using all of its cycles to share those cycles with the other CPUs. This makes the system very inefficient for running one operating system. It does, however, allow for simple virtualization of many different operating systems on one machine. A major problem with this setup is that a disk write from one operating system is not necessarily conveyed to the other operating systems that are running. If this happens, the disk can become dirty with parts of incomplete files scattered across it, which leads to errors and unpredictable results. Another problem with this method is that, since each CPU handles one operating system, the other CPUs can be left idling while one CPU does all of the work.4
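The "private memory per CPU" limitation can be sketched in a few lines. This is a toy illustration, not a real operating system: each OS instance is modeled as a separate process with its own copy of memory, so work done in one instance is invisible to, and unusable by, the others.

```python
# Toy model of the n-CPUs, n-operating-systems design: each "OS" is a
# separate process with private memory, so nothing is shared.
from multiprocessing import Process

counter = 0  # each process gets its own private copy of this variable

def busy_instance():
    # This "OS instance" does a lot of work...
    global counter
    counter += 1_000  # ...but only its own private copy changes.

if __name__ == "__main__":
    p = Process(target=busy_instance)
    p.start()
    p.join()
    # The parent's copy is untouched: no cycles or results were shared.
    print(counter)  # 0
```

The isolation that makes this design simple (and virtualization-friendly) is exactly what makes it inefficient: an idle instance has no way to lend its cycles to a busy one.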

A second way of handling multiprocessing is to have a Master-Slave setup (also known as Primary-Secondary). Here one processor (the master) controls one or more processors (the slaves): one processor tells all of the processors beneath it what to run and when to run it, while the master itself runs only the operating system. The Master-Slave method fixes most of the problems of the n-CPUs, n-operating-systems approach. The slave CPUs all share memory, and the slaves constantly ask for new jobs as they complete their old ones, never allowing themselves to become idle. This methodology is not without its problems; for example, it is susceptible to a bottleneck. If the master processor is not fast enough to hand out jobs, the other processors can become idle and will therefore waste precious CPU cycles. This means the model is good for smaller systems, where the master will not be bogged down by the slaves' requests.5
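The dispatch loop above can be sketched with ordinary threads. This is a minimal sketch, not real kernel code: the "master" hands out jobs over a queue, and "slave" workers pull new jobs as they finish old ones, so none sits idle while work remains. The job contents and thread counts are illustrative.

```python
# Master/slave dispatch sketch: the master only distributes work;
# workers pull jobs from a shared queue as fast as they finish them.
import queue
import threading

jobs = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        n = jobs.get()
        if n is None:       # sentinel: master says "no more work"
            break
        results.put(n * n)  # stand-in for running a user process

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

# The master dispatches jobs but runs none of them itself.
for n in range(10):
    jobs.put(n)
for _ in workers:
    jobs.put(None)          # one shutdown sentinel per worker
for w in workers:
    w.join()

total = sum(results.get() for _ in range(10))
print(total)  # 285 = 0^2 + 1^2 + ... + 9^2
```

The bottleneck described in the text shows up here too: if enqueueing jobs were slower than the workers can square numbers, the workers would simply block on `jobs.get()`, idle.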

The third way for an operating system to use multiprocessing is Symmetric Multiprocessing. Unlike the Master-Slave method, the CPUs are all equal here: they all share one copy of the operating system, and all of them can run user processes. The problem with this method is that if multiple CPUs try to run the same operating system code at once, they can overwrite each other's work, which could cause the system to fail. A way around this is to allow only one processor to run operating system code at a time, essentially making operating system code mutually exclusive (mutex).6 With a large number of processors, this can become as slow as the Master-Slave method. A better way is to break the operating system up into logical parts that do not interact with each other, making different parts of the operating system available to each CPU. Those parts are still protected by mutexes, so that no single part of the operating system runs on more than one processor at once and causes system errors. Most multiprocessor systems on the market today use this method. The hardest parts of programming an operating system this way are using mutexes properly and deciding where the separations should be. What makes this more difficult is the possibility that this method will no longer be used in the future, forcing programmers to learn a whole new method and incorporate it into their programs.7
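Mutual exclusion around a shared operating-system structure can be sketched as follows. The names (`table`, `enter_kernel`) are illustrative stand-ins, not real kernel APIs: without the lock, concurrent read-modify-write on shared state can lose updates; with it, only one "CPU" (here, a thread) executes that code path at a time.

```python
# Sketch of a mutex-protected kernel path: four threads hammer a
# shared structure, but the lock serializes each read-modify-write.
import threading

lock = threading.Lock()
table = {"entries": 0}   # stand-in for a shared kernel data structure

def enter_kernel(times):
    for _ in range(times):
        with lock:                          # mutual exclusion (mutex)
            current = table["entries"]      # read
            table["entries"] = current + 1  # modify and write, safely

threads = [threading.Thread(target=enter_kernel, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(table["entries"])  # 40000: no updates were lost
```

Dropping the `with lock:` line lets two threads read the same `current` value and each write `current + 1`, silently losing one increment, which is exactly the overwriting failure the text describes.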

There are also different ways to connect the CPUs together. If they are all on one bus (or, more generally, in one computer), the result is called a tightly coupled system. This is the standard setup for most desktop computers on the market today, and because of it, programmers are quickly becoming versed in writing software that takes advantage of multi-core technology. This is similar to the push that 64-bit processors are exerting on the software industry: programs have to be rewritten to be compatible with the new capabilities of the processors. If they are not, they can end up slow, because they run on a compatibility layer that supports the older 32-bit instructions (or single-processor threads), or they can be incompatible altogether and not run on the system at all.8 As previously stated, a single processor makes writing programs much easier because there is only one thread running by itself and producing results. In a multiprocessor system this is obviously no longer the case: multiple threads and commands run at the same time. This means that if nothing keeps track of what has and has not been done, commands can be run twice and results can be overwritten. This problem is solved by synchronizing the threads so that they do not conflict with each other. It may seem like a simple task, but it is not.9
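One way to sketch "keeping track of what has been done" is to have threads claim commands from a shared set under a lock, so no command is ever run twice even though many threads race for the work. The names here are hypothetical, chosen for illustration.

```python
# Synchronization sketch: eight threads race for 100 commands, but
# claiming a command is atomic, so each command runs exactly once.
import threading

pending = set(range(100))   # commands not yet run
executed = []               # record of completed commands
lock = threading.Lock()

def run_commands():
    while True:
        with lock:          # claiming a command is mutually exclusive
            if not pending:
                return
            cmd = pending.pop()
        executed.append(cmd)  # "run" the claimed command exactly once

threads = [threading.Thread(target=run_commands) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(executed), len(set(executed)))  # 100 100: no duplicates
```

The lock guards only the claiming step, not the command itself, so the threads still do most of their work in parallel; this claim-then-execute pattern is a common way to get both safety and throughput.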

The second way of connecting processors is called loose coupling. This is essentially a number of computers (either multiprocessors or uniprocessors) that are connected together, but not necessarily in the same computer, the same room, or even on the same continent. These processors use their idle time (in some cases, all of their time) to perform tasks such as mining data for patterns. One example of this in action is BOINC (Berkeley Open Infrastructure for Network Computing), which runs many different "projects" on many different computers all over the world. One of the most famous of these projects is SETI@home, which uses the power of the computers connected to it to search for alien signals in the radio waves picked up by radio telescopes on Earth.10

The future clearly lies with these multiprocessor systems; they are faster and they save power.11 Since we have almost reached the speed limit of silicon, the only way to make systems faster is to throw more and more chips at the system. We could also freeze the chips to near absolute zero, raising the speed limit of silicon to as much as 250 times what it is right now (roughly 350 gigahertz versus 2 to 3).12 The second solution is not feasible, as most people would not want to pay for a cryogenic freezer, nor would they want to foot the bill for its power consumption. That leaves the former solution of piling processors together to get more speed out of the system (be it a server farm or a personal computer). This also means that the size of the processors will have to be inversely proportional to their number: as the number of processors increases, their size will have to decrease. The only way for this to be feasible is for the prices of the processors to drop continuously, which means that manufacturing them has to become cheaper and cheaper as the demand for faster and faster machines continues to rise.













