Moved

I moved to seanja.com because I wanted more control and some space to play with, so update your links! (It only cost me $10 for the whole year.)


The don’t mess with it files 1

I was playing around with the settings in the CompizConfig Settings Manager today to see what was there. Who could resist all of those buttons? (Hint: not me.) Apparently I mucked something up pretty badly, because my cursor started disappearing randomly. It was still there, I could see it hovering over links and buttons, but it was invisible. Kind of annoying. So I decided that it was Compiz's fault, and messed around with more settings (I think I changed this one before, maybe it should be this! No… the cursor is still disappearing. Crap…). Finally I decided to reset all of the settings to default and rebooted the computer (old habits die hard). That worked. So I started messing with the Compiz settings again. So far I still have a cursor. Maybe I should quit while I am ahead… (I think I may have just lost the mouse again)…

Anyway, here is some eyecandy for you mac fanboys/girls…

ADD Helper

Negative

Coverflow

Negative (window only)

Taskbar window preview

Show desktop

Multi desktop view

Tell me that is not funny

You can do this one in every 30 times and still have 97% positive feedback.
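(For the record, the arithmetic roughly checks out; one bad transaction out of every 30:)

```python
# One negative feedback in every 30 transactions still rounds to 97% positive.
positive = 29 / 30 * 100
print(round(positive))  # → 97
```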

Multiprocessor Systems

Multiprocessor systems are not a new concept or a new design; they have been around for many years. StarOS, for example, had been in use at Carnegie Mellon University on the 50-processor Cm* multi-microprocessor computer since 1979.[1] The Sega Saturn had two 32-bit processors, a new and very challenging concept for the game developers of the time, who essentially stopped developing anything for the Saturn and went to develop for the PlayStation instead. The same problem arose with the Atari Jaguar, which had three different chips containing five processors.[2] It was simply too complicated to program for the multiprocessor consoles.[3] The fact that developers avoided them was probably one reason it took around 26 years for the multiprocessor computer to become mainstream.

It has taken some time, but now that these systems are mainstream it is interesting to look at how the back end actually works, or, more accurately, how the operating system handles having two or more processors to deal with instead of the (up to this point) traditional one. Since multiprocessing is now the norm, almost every operating system out there automatically supports at least two processors; Mac OS X 10.4 currently supports eight.

There are three different ways for a computer to handle multiprocessing. One of the most basic is to give each processor its own operating system, which also means that each CPU has its own private memory. In other words, if you have N CPUs you will be running N operating systems. Since each CPU has its own memory and operating system, a CPU that is not using all of its cycles has no way to share them with the others. This makes the setup very inefficient for running a single operating system, though it does allow for simple virtualization of many different operating systems on one machine. A major problem with this setup is that a disk write from one operating system is not necessarily conveyed to the other operating systems that are running. When that happens, the disk can become dirty, with parts of incomplete files scattered across it, which leads to errors and unpredictable results. Another problem is that, since each CPU handles one operating system, the other CPUs can be left idling while one CPU does all of the work.[4]

A second way of handling multiprocessing is a master-slave setup (also known as primary-secondary), where one processor (the master) controls one or more processors (the slaves). The master tells all of the processors beneath it what to run and when to run it, while itself running only the operating system. The master-slave method fixes most of the problems of the N CPUs, N operating systems approach: the slave CPUs all share memory, and they constantly ask for new jobs as they complete their old ones, never letting themselves go idle. The methodology is not without its problems, though; it is susceptible to a bottleneck. If the master processor cannot hand out jobs fast enough, the other processors sit idle and waste precious CPU cycles. This means the model is best suited to smaller systems, where the master will not be bogged down by the slaves' requests.[5]
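The dispatch loop described above can be sketched with ordinary threads standing in for CPUs. The worker count, the job queue, and the squaring "job" are all illustrative, not taken from any real scheduler:

```python
# Minimal sketch of the master-slave model: the main thread plays the master,
# handing out jobs; "slave" workers pull new jobs as soon as they finish.
import queue
import threading

NUM_SLAVES = 4
jobs = queue.Queue()
results = queue.Queue()

def slave():
    while True:
        job = jobs.get()            # ask for work as soon as we are free
        if job is None:             # sentinel: no more jobs, shut down
            break
        results.put(job * job)      # "run" the job (here: square a number)

workers = [threading.Thread(target=slave) for _ in range(NUM_SLAVES)]
for w in workers:
    w.start()
for n in range(10):                 # the master enqueues the work...
    jobs.put(n)
for _ in workers:                   # ...then one sentinel per slave
    jobs.put(None)
for w in workers:
    w.join()

squares = sorted(results.get() for _ in range(10))
print(squares)                      # squares of 0..9, in some completion order
```

Because the slaves pull work themselves, none of them idles while jobs remain; the bottleneck only appears when the single dispatcher cannot keep the queue full.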

The third way for an operating system to use multiprocessing is symmetric multiprocessing (SMP). Unlike the master-slave method, the CPUs here are all equal: they share one copy of the operating system, and all of them can run user processes. The problem with this method is that if multiple CPUs try to run operating system code at the same time, they can overwrite each other, which can cause the system to fail. One way around this is to allow only one processor to run operating system code at a time, essentially making the operating system code mutually exclusive (a mutex).[6] With a large number of processors, this can become as slow as the master-slave method. A better way is to break the operating system into logical parts that do not interact with each other, making different parts of the operating system available to each CPU. Those parts are still protected by mutexes, so that no part of the operating system runs on more than one processor at once, avoiding system errors. Most operating systems on the market today use this method. The hardest parts of writing an operating system this way are using the mutexes properly and deciding where the separations should be. What makes it more difficult is the possibility that this method will no longer be used in the future, forcing programmers to learn a whole new one and incorporate it into their programs.[7]
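A minimal sketch of the mutex idea, with Python threads standing in for CPUs and a plain list standing in for shared kernel state. The names `kernel_lock` and `run_queue` are invented for illustration:

```python
# Only one "CPU" (thread) may mutate the shared operating-system state at a
# time; everyone else blocks on the mutex until it is released.
import threading

kernel_lock = threading.Lock()   # the coarse "one CPU in the OS" mutex
run_queue = []                   # shared OS data structure

def schedule(task):
    with kernel_lock:            # mutual exclusion around OS code
        run_queue.append(task)

threads = [threading.Thread(target=schedule, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(run_queue))  # → 100
```

Splitting the OS into independent parts, as the paragraph describes, amounts to replacing this one coarse lock with several finer ones, one per part, so that CPUs only contend when they touch the same subsystem.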

There are also different ways to connect the CPUs together. If they are all on one bus (or, more generally, in one computer), the system is called tightly coupled. This is the standard setup for most desktop computers you can buy on the market today, so programmers are quickly becoming versed in writing software that takes advantage of multi-core technology. This is similar to the push that 64-bit processors are putting on the software industry: programs have to be rewritten to be compatible with the new capabilities of the processors. If they are not, they can end up slow because they run on a compatibility layer that supports the older 32-bit instructions (or single-processor threads), or they can be incompatible altogether and not run on the system at all.[8] As previously stated, a single processor makes writing programs much easier because there is only one thread running by itself and producing results. In a multiprocessor system that is obviously no longer the case: multiple threads and commands run at the same time, which means that, if nothing keeps track of what has and has not been done, commands can be run twice and results can be overwritten. The problem is solved by synchronizing the threads so that they do not end up conflicting with each other. It may sound like a simple task, but it is not.[9]
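The overwriting problem can be sketched with a shared counter that several threads update. Each update is a read-modify-write; without coordination two threads can read the same old value and overwrite each other's result, so a lock is used to keep the steps from interleaving:

```python
# Thread synchronization in miniature: the lock serializes the
# read-modify-write on the shared counter so no update is lost.
import threading

counter = 0
lock = threading.Lock()

def work(iterations):
    global counter
    for _ in range(iterations):
        with lock:               # without this, updates could interleave
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # → 40000, every time; unsynchronized, the result may fall short
```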

The second way of connecting processors is called loose coupling. This is essentially a number of computers (either multiprocessors or uniprocessors) connected together, but not necessarily in the same computer or the same room, or even on the same continent. These processors use their idle time (in some cases, all of their time) to perform tasks such as mining data for patterns. One example of this in action is BOINC (the Berkeley Open Infrastructure for Network Computing), which runs many different “projects” on many different computers all over the world. One of the most famous of the projects is SETI@home, which uses the power of the computers connected to it to search for alien signals in the radio waves picked up by radio telescopes on Earth.[10]
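A toy version of the same idea: split a big search into independent work units and farm them out. Here local threads stand in for the volunteer machines, and an arbitrary divisibility test stands in for the real pattern search; none of the names come from BOINC itself:

```python
# Loose coupling in miniature: each "node" independently scans its own
# work unit (a slice of the data) for a pattern, and the results are merged.
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000_000))
CHUNK = 100_000

def scan(chunk):
    # Each work unit needs nothing from the others: no shared memory at all.
    return [n for n in chunk if n % 99_991 == 0]   # stand-in "pattern"

chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
with ThreadPoolExecutor(max_workers=4) as pool:
    hits = [n for found in pool.map(scan, chunks) for n in found]

print(hits)  # the multiples of 99991 below 1,000,000
```

The key property, which this mirrors, is that work units are independent: a node that goes offline mid-task costs nothing but a reissued chunk.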

The future clearly belongs to these multiprocessor systems: they are faster and they save power.[11] Since we have almost reached the speed limit of silicon, the only way to make systems faster is to throw more and more chips at them. Well, we could also freeze the chips to near absolute zero, dramatically raising silicon's speed limit (to around 350 gigahertz, versus today's 2 to 3).[12] But the second solution is not feasible, as most people would not want to pay for a cryogenic freezer, nor would they want to foot the bill for its power consumption. That leaves the former solution of piling up processors to get more speed out of the system (be it a server farm or a personal computer). This also means that the size of the processors will have to shrink as their numbers grow: as the number of processors increases, the size of each must decrease. The only way for this to be feasible is for processor prices to drop continuously, which means that manufacturing has to become cheaper and cheaper as the demand for faster and faster machines continues to rise.
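A back-of-envelope caveat on "throw more chips at it": Amdahl's law says the speedup from N processors is capped by the fraction p of the work that can actually run in parallel. The 90% figure below is purely illustrative:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the workload and n is the number of processors.
def amdahl_speedup(p, n):
    return 1 / ((1 - p) + p / n)

for n in (2, 4, 8, 64):
    print(n, round(amdahl_speedup(0.9, n), 2))
# Even with 90% of the code parallel, 64 processors yield well under 10x.
```

This is why piling on processors only pays off if the software is rewritten to keep the serial fraction small, which is the same pressure on programmers the essay describes above.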

[1] http://portal.acm.org/citation.cfm?id=806579&dl=ACM&coll=GUIDE

[2] http://darkwatcher.psxfanatics.com/console/microprocessor.htm

[3] http://darkwatcher.psxfanatics.com/console/jaguar.htm

[4] http://www.informit.com/articles/article.asp?p=26027&seqNum=3&rl=1

[5] http://www.informit.com/articles/article.asp?p=26027&seqNum=3&rl=1

[6] http://en.wikipedia.org/wiki/Mutex

[7] http://www.informit.com/articles/article.asp?p=26027&seqNum=3&rl=1

[8] http://en.wikipedia.org/wiki/64-bit

[9] http://www.phptr.com/articles/article.asp?p=26027&seqNum=4&rl=1

[10] http://setiathome.berkeley.edu/sah_about.php

[11] http://www.tomshardware.com/2005/11/07/single/page13.html

[12] http://www.popularmechanics.com/blogs/science_news/3194901.html

Update your software!

Since Linux is so good at checking whether or not something is out of date, you would think there would be a program for Windows that does the same. None that I know of, but there is a website that will do a quick scan of the software you have, tell you whether or not it is out of date, and show you how to update the programs that are.

Apple has released version 7.2 of QuickTime, which patches eight serious flaws in the product, the worst of which could allow your computer to be compromised simply by watching a specially crafted QuickTime movie. If your QuickTime version number is less than 7.2, please update now. Adobe has also released a patch for its highly popular Macromedia Flash plug-in. It fixes flaws in Flash that, like the QuickTime flaw, could allow your computer to be compromised simply by watching a malicious Flash movie. According to the Adobe bulletin, the flaw affects “9.0.45.0 and earlier, 8.0.34.0 and earlier, and 7.0.69.0 and earlier.” These flaws are serious, folks; update immediately.

http://secunia.com/software_inspector/?task=load

Very Cool things coming from Microsoft

This video demonstrating Microsoft’s Photosynth project left me speechless and made me think the unthinkable: that computers are finally delivering on what they promised. It’s good news even if it is a mere 30 years late. Thanks to Lex Davidson for the link. Note that you need broadband to view it.

Check it out:

http://www.ted.com/index.php/talks/view/id/129

Random Haiku

What the f**k, future?
It’s been over twenty years:
where’s my hover car?!

-drivl.com