
Courtesy of /. and the BBC, and fresh from the no-duh desk, we have news that Apple is recommending that Mac users run an anti-virus program.   Gizmodo reports that Apple has actually been recommending this since last year.   The irony is thick when you remember the "switcher" ad in which Apple claimed viruses weren't a problem on a Mac.   But the real question is, why is anyone surprised?   Neither Macs nor *nix systems are immune to viruses, trojan horses, or the like.  They never have been.

Sure, a lot of people have the idea that those systems are immune, or nearly immune, to viruses.   Ironically, most people involved in computer security would point out that there's no reason to believe that.   In fact, it's worth pointing out that even the current scarcity of viruses and other hostile software for these systems does not prove that they are any more secure than Windows.   They might well be more secure than Windows, but that doesn't eliminate the need for anti-virus software.

So, like I said: Macs need anti-virus?  Duh. Well, maybe more like they should have it…


One of the nice things about the open source operating system GNU/Linux is the breadth of choice available to users, most of it at absolutely no cost.   This allows a user to choose the distribution that best matches his tastes.   But there is one flaw in this glut of choice: how is a beginner to choose a distro?   Okay, let's say you limit the choices to the "major" distros, like Fedora, OpenSUSE, Ubuntu, et cetera.   Even then, there's no easy way for a newbie to pick.   I feel that if we can change this situation, we will enable new users to adopt Linux more easily, and as a result spread free and open source software.

The question then arises: which distribution should be the "go-to" distribution for new Linux users?   Well, if you read the title of this post, you'll have guessed already: that distribution should be Ubuntu.   Now, in all fairness, I do use and like Ubuntu, but it isn't the distro I use most often; OpenSUSE and Fedora are battling for that prize.   Rather, Ubuntu was the first Linux distro I used.

With that in mind, here are three good reasons why all Linux users should support Ubuntu as the Linux distro for new users.

1:   Ubuntu's stated goal has always been to make a Linux for ordinary people, and it has usually succeeded in making its distro easy for novices to pick up.    For that reason alone, Ubuntu is already a good distro for new users.

2:   While a generic Wubi is being developed to work with any Linux distribution, Ubuntu is, for now, still the only distro that can install itself easily onto a Windows system and, just as easily, remove itself.   This reduces the upfront cost in time and knowledge needed to install Linux, so new users will be more likely to try it and will encounter fewer roadblocks along the way.

3:   While choice is wonderful, having one distro that every Linux user can point to as the distro for newcomers makes life easier for advocates.   An advocate won't have to compare different distros or explain any complex ideas.  He can simply hand over a CD, say "choose 'install inside Windows'", and the rest will be self-explanatory.  Hiding the details from new, usually non-technical, users makes the whole experience better front-to-back, and makes it more likely that they will stick with Linux.

That being said, let me know what you think.   Have I gone crazy, or does this seem to be a net benefit for FOSS?

I'm a programmer, though really only an amateur right now.   I've written programs in C++, C, and Pascal; my first language was Pascal, Turbo Pascal specifically.    I love the act of programming, but when I try to explain what programming is like, I often find myself at a loss for words.   What does a person do when they code?   My best metaphor has always been that coding is like writing or composing music.   It's an act of creation…an art.

What does that mean?   Art is often thought of as creation ex nihilo, creating out of whole cloth.   But that isn't true.   A writer uses a known language, with a known grammar.   When she writes, she writes with an eye toward her genre.   She might borrow from its conventions or go against them, but few good writers ignore them.   A musician will tend to pick a certain key and a certain scale.   He doesn't have to, but the alternative, composing in the chromatic scale (using all possible notes), is often less pleasing and more difficult to work in.  He, too, will compose with the conventions of his genre in mind, and can use, ignore, or even self-consciously twist those conventions to make the statement he wants his music to make.   In each case the artist is remixing, for lack of a better word, conventions and limitations to express his or her own statement, drawing on a library of pre-built words, expressions, biases, and beliefs.   My question is, understanding all of that, how could you view coding as anything but art?

When you code, you choose your limitations.  Your language decides what you are capable of expressing.   Coding in C++ is always different, and always produces a different result, from coding in Lisp, or even assembly.    The language, like a scale, limits your options and, by doing so, lets the coder accomplish certain things more easily.   Consider that when you finally start to program, you will usually be writing an application that has been implemented before; yet your code, and your final product, will inevitably differ from the earlier ones.   To put it another way, it isn't just a coincidence that programmers will write the same program, or even the same function, in different ways, even in the same language.   When I code a program, I'm expressing my own beliefs and biases about how that program should work.  It might be better or worse than someone else's implementation.   That doesn't matter, because unless I am deliberately mimicking someone else's style, I will always code how I believe a thing should be coded.   A more Zen-like way to put the problem is this: I can only code as I would code, or code as I think others would code.    I can never code as a different person codes, because doing so would require me to be that person.

Thus coding is not only a form of art, but a form of personal expression.

I know that seems funny.  But even in the most staid task, the coder cannot escape the fact that HE is always the one coding, and that the code will reflect either his beliefs, or what he believes his boss's beliefs are, about the best way to implement the program.   In every case (excluding the one where the programmer is essentially copying someone else's code), the programmer is the filter through which the code passes, the "designer" or "creator" of the code.

Cloud computing. It's a term that has become so pervasive that it's easy to imagine it as the next logical step in computing. I, however, find myself agreeing with Richard Stallman more and more. Cloud computing is perhaps the least needed, least thought out, and potentially most dangerous "improvement" in modern computing history. I'm also aware that I am in the minority among tech-savvy users on this position, so I must acknowledge the potential benefits. Moving applications and data storage onto servers has its advantages. The operating system is no longer a barrier; a person doesn't have to choose software based, in large part, on their operating system. Data storage becomes more convenient as online storage such as Amazon's S3 enables ordinary users to operate what are essentially their own, mostly hassle-free, web servers. Even seemingly innocuous services like web e-mail and hosted blogging illustrate the ability of "cloud computing" to make previously complicated services simple. Anyone can run their own internet-connected file server, but only a few have the technical knowledge or desire to do so successfully. So why don't I like cloud computing?

There are three main reasons why I'm lukewarm on cloud computing. First, it requires the user to depend upon a machine, run by someone else, whose only connection to him is the internet. Second, it means sending data to, and storing it on, a server you don't control and whose security measures you cannot verify. Third, it has the potential to be less secure than traditional desktop computing.


Reliability is an essential aspect of any system. A computer is only useful as long as it continues to function; a website is only useful as long as it continues to run with bandwidth and resources to spare. Judged by these simple requirements, it ought to seem obvious that computing in the cloud will be less reliable, everything else being equal. A desktop application depends on a single workstation having enough resources and, at a lower level, functioning hardware. A cloud application requires the same things from a server, which is most likely serving multiple users, plus a functioning internet connection with sufficient bandwidth. Adding points of failure inevitably increases the chance that the service will be degraded or fail outright.
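The arithmetic behind this claim is easy to sketch. Assuming independent failures (an illustrative assumption, with made-up numbers, not measurements of any real provider), the availability of a chain of components that must all work is the product of the individual availabilities, so each added point of failure can only lower it:

```python
# Availability of a service whose components must ALL be up at once.
# Assumes independent failures; the figures below are illustrative.

def chain_availability(*components):
    """Multiply the availabilities of each required component."""
    result = 1.0
    for a in components:
        result *= a
    return result

# Desktop app: only your own machine has to be up.
desktop = chain_availability(0.99)

# Cloud app: your machine, your internet link, and the server
# must all be up at the same time.
cloud = chain_availability(0.99, 0.98, 0.999)

print(desktop)  # 0.99
print(cloud)    # roughly 0.969 -- strictly lower
```

Even a very reliable server can't raise the product above the weakest link, which is the "everything else being equal" point in numerical form.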

But what of hardware failures? Isn't it true that a business will use better hardware and employ people more knowledgeable than the average consumer? Won't these considerations tilt the scales toward the cloud? Simply? No. Hardware performance and quality have been increasing while cost has been decreasing, and that relationship makes the average consumer's desktop more capable than ever before. It is true that in the rare case of hardware failure, a cloud application will have built-in redundancy and people capable of fixing the problem, which prevents most breaks in service. Yet this fails to be a strong argument. Most consumers have multiple computers, a fact that will only become more widespread as hardware costs fall. That, coupled with a prudent backup plan, would allow most consumers to avoid serious disruptions.


I was originally planning to focus exclusively on the privacy implications of moving and manipulating data on a "foreign" server. The truth is that privacy seems to be important to only a select few people, myself included, and the consequences of cloud computing really extend to concerns over control. Cloud computing leads to a fundamental loss of control. Our data is stored on someone else's servers, in someone else's building. By working through a cloud application, the user is placing unearned trust in the honesty of the application's owner and its employees. In every case the user is put into a situation where he lacks physical control over his own data.

Why does this matter? Data is malleable: it can be easily changed, and easily copied. In this situation, someone else can more readily copy or modify your data and monitor you. It also opens up the further possibility, suggested by recent events, that you could even be locked out of your own data and applications. Some of these risks can be mitigated through encryption, but encryption is no panacea. Most people choose weak keys or passwords, which weakens the encryption considerably. Many application providers also offer secondary means of access, often in the form of "security questions", which are usually even weaker than the key or password in use.


Cloud computing is also, at present, potentially less secure than traditional desktop computing. Web applications are typically available at any time of day, any day of the week, which means they are exposed to attempted exploits at all times, and the database and data behind an application are continually available to anyone who manages to gain access. Contrast that with a desktop application. A person who breaks into a desktop has access to the data on that specific machine; if the computer's network happens to be unsecured, he might reach other machines as well. At its worst, the damage is limited to specific instances, specific machines. In other words, it is easier to limit the damage caused by penetration of a desktop application than of a cloud-based one. A cloud application is, then, a bigger target than any individual user would be. That, combined with the vulnerability modern web applications have shown to attack by malicious users, ought to inspire caution.

A fine young atheist.   I have to join P.Z. Myers in giving gogreen18 a godless clenched-fist salute for this lucid, passionate explanation of the reasons why atheists need to speak out.

Thanks to P.Z. and Pharyngula.

The New York Times has an interesting article about Hindu violence against Christians in India.   The story is interesting because most Westerners seem to think that only the big three monotheistic religions suffer from violence.   Yet India is suffering from internal religious strife.

There are, apparently, reports of Hindus forcing their neighbors to convert to Hinduism or face the threat of exile or even death.   As usual, the persecuted are a minority: Christians make up about two percent of the population of majority-Hindu India.   The violence was nominally sparked by the death of a Hindu preacher who had preached against Christianity.   Authorities seem to believe that Maoist guerillas were responsible, but the Hindus (radicals, according to the Times) have nevertheless blamed the Christians.

The only question ought to be: is any of this really surprising?

Not really.

Science Is…

Science is…

Science is hard.

Science is a profession.

Science is a way to understand the world.

Science is being wrong, and still succeeding.

Science isn’t a religion.

Science is the feeling you got seeing “The Pillars of Creation”.

Science isn’t a panacea.

Science is a means, not an end.

Science is the love of wisdom.

Science is a candle in the dark (thanks, Sagan).

Science is a hope for the future.

Science isn't arrogant.

Science is wonder.


Science is…

Well, it is looking like the newly pork-filled financial bailout bill will pass.   In two procedural votes, more than the 218 votes necessary were cast for the bill.   It looks like all of the talk about high principles turned out to be nothing but hot air.   All that was needed was some pork-barrel spending.   Admittedly, they haven't passed the bill yet…but it doesn't look good.   I guess at this point, all that is left is to wait and see.

Philosophy has a long history of paradoxes, going back to the beginnings of philosophy in ancient Greece. Paradoxes often concern a particular theory or system; time-travel paradoxes in physics are a well-known example. I like paradoxes. They push you to consider a problem from multiple directions. But what I like most about them is how they require the listener to understand, and take apart, the language of the problem.

A rather popular paradox, sometimes described as a problem of free will and determinism, is known as Newcomb's Paradox. I first encountered it in Martin Gardner's book of math puzzles, "The Colossal Book of Mathematics". Newcomb's paradox, named after its creator, the physicist William A. Newcomb, consists of a game between two agents, and belongs to a branch of mathematics and philosophy known as decision theory. The two actors are the "Predictor" and the "Gambler" (my terminology). The situation is set up as follows: there are two boxes, B1 and B2. Box B1 always contains $1,000, but B2 can contain either nothing or $1,000,000.

The Gambler (you, in this situation) can either

1: Take what is in both boxes.

2: Take only what is in B2.

A certain amount of time beforehand, the Predictor guesses whether the Gambler will choose option 1 or option 2. If he guesses that the Gambler will choose option 1, he leaves B2 empty. If he guesses option 2, he puts $1,000,000 in B2. The Predictor is so good at guessing that he is almost certain to be correct; in some tellings he is almost godlike in his accuracy. It isn't necessary to assume any sort of determinism on the Predictor's part.

The paradox arises because two mutually exclusive strategies both appear to be correct. Assuming the Predictor will almost certainly be accurate, it pays to always choose option 2: you will almost always earn one million dollars. A sure bet, if you will. In contrast, if you choose option 1, you will almost always earn only one thousand dollars and, only very rarely, 1.1 million. But consider, for a moment, that the Predictor made its guess a while ago. The cash is already in the boxes. So wouldn't it actually be more logical to select option 1? If you assume there is no "backwards causality", that nothing you do can change the contents of the boxes after the fact, then either B2 contains the money or it doesn't. No matter what you choose, you can't affect the odds. As such, it is better to select option 1, maximizing your take by grabbing both boxes, because whatever IS already within the boxes will be there no matter what you choose.
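The pull of the "sure bet" strategy can be made concrete with a quick expected-value calculation. This is just a sketch of the standard decision-theory arithmetic, treating the Predictor's accuracy as a parameter p; only the payoff amounts come from the puzzle itself.

```python
# Expected payoffs in Newcomb's game, given Predictor accuracy p.
# B1 always holds $1,000; B2 holds $1,000,000 only if the Predictor
# guessed that the Gambler would take option 2.

def expect_option2(p):
    # Take only B2: with probability p the Predictor foresaw this
    # and filled B2 with $1,000,000.
    return p * 1_000_000

def expect_option1(p):
    # Take both boxes: with probability p the Predictor foresaw this
    # and left B2 empty (we get only B1's $1,000); with probability
    # 1 - p we get B1 plus a full B2.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 0.99):
    print(p, expect_option2(p), expect_option1(p))
```

For any accuracy above about 50.05%, option 2 wins on expected value, which is exactly why the "the money is already there" argument feels so paradoxical.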

Hopefully I've done a good job making both strategies sound compelling. When I first read the paradox, I started by trying to understand its key figure, the Predictor. The paradox revolves, in my mind, around how the Predictor makes his predictions. The truth is that we aren't told how he determines his choice, so we, the Gambler, are unable to get any idea of what his choice might be. If we assume no knowledge, we can only base our reasoning on what we decide to do: whatever we choose will most likely be what the Predictor will have predicted. This sounds weird, but since the Predictor is so good at guessing what we will do, we must assume he will guess whatever we actually do, because there doesn't appear to be any better strategy. In that case the best strategy is to choose option 2. If, however, we understand him to be basing his choice on, say, our psychological preference for one option or the other, another potential strategy appears: we could choose the opposite of our instincts. Either way, we can now see that the paradox would seem to be the result of how the Predictor is defined.

There is obviously more to discuss about this paradox. What strikes me most strongly about the puzzle is that it makes an interesting point about determinism. If we assume the Predictor really does know everything you will do in advance, then he is essentially affecting the future by playing the game; we have defined the Predictor as making whatever choice you will make before you make it. Yet the earlier point, that we cannot make money appear or disappear by our actions, remains. To quote Martin Gardner's excellent response on this topic:

“It is not logically inconsistent to suppose that the future is totally determined…but as soon as we permit a superbeing to make predictions that interact with the event being predicted, we encounter contradictions that render the existence of such a superpredictor impossible.”

With the impending bailout of the financial sector, it's hard not to consider the implications of the U.S.'s current economic policy.   I want to mention, specifically, the policy of bailing out large, failing businesses.   The phrase bandied about so often is that a business is simply "too big to fail".   That can mean a lot of things, not all of them meaningful.   Following close on the heels of the events of September 11th, 2001, the major U.S. airlines found themselves in a dire financial situation.  In response, the government ponied up fifteen billion dollars of taxpayer money to prevent their collapse.   This wasn't the first time the government has had to bail out a large industry in the U.S.

If you consider the government's bailouts of other luminaries such as Lockheed Martin, Chrysler, and even the City of New York, a pattern starts to emerge: one of corporate welfare, paid by the taxpayers to failing businesses.   In each case the business, or businesses, were failing, and in each case the government deemed the companies too "big", too important, to fail.   That is a problem, and we need only consider how a capitalist free market is supposed to work to understand why.

The fate of a business is closely tied to how it is run.   If I run my business into the ground through some combination of bad business practices and poor financial foresight, I have only myself to blame.   Nor is it necessarily right to blame natural disasters for the failures of large businesses.  A small business might not be able to pull in enough capital to reopen, but a large business that fails after a disaster failed because it did not plan properly for the future; disasters, natural or man-made, are a regular occurrence.   Even if you disagree with that contention, I would still argue that there are very few good reasons to help out a failing company, most of them a combination of philosophical and pragmatic concerns.

When a company fails due to poor business decisions, that is not necessarily a good thing.  However, bailing out the company would usually be a far worse action.   Each time you bail out a company, you set a precedent for doing so again.   The more businesses feel they are immune from failure, the more willing they are to pursue reckless or short-sighted strategies.   This collective unwillingness to allow certain businesses to fail must take part of the blame for our current situation.

Beyond such pragmatic concerns, there is something rather disconcerting about the idea that we should spend tax money to prop up failing businesses.   Businesses so often demand as little regulation as they can possibly get away with, and for good reason: it is easier to succeed if you don't have someone telling you what you can and can't do.   Yet few businesses would reject a bailout from that very same government.  It's selfish, but that's not bad; acting in one's self-interest is the idea most of our markets are built upon.   The problem is that, just as strong markets are built upon strong competition, innovation relies upon free competition between companies, and what we get when large companies are backed by federal dollars is the exact opposite of competition or innovation.   If an uncompetitive business can depend upon being propped up, it has an unfair advantage over its competitors.   These payouts are not only fundamentally unfair, they also prolong inefficient and uncompetitive businesses at the expense of their competition.

Thomas Friedman, in an interview with Scientific American, suggested that no truly free markets actually exist anymore, and I for one tend to agree.  He stated the position as an argument for regulation; I would use it in a slightly different manner.   Conservatives in the U.S. are fond of preaching the religion of free-market economics, but the truth is more complicated.   A truly free market is dangerous: few consumer-safety and fraud protections are built in, and the Gilded Age of the United States provides a handy reference for such a market.   A completely controlled market isn't the answer either, as the woes of the former Soviet Union after the fall of the Berlin Wall illustrate.   So what is left?   That answer is far from easy to give.   However, if we are to have a market that can be called anywhere near free, we must allow businesses to succeed or fail based upon their actions.   If we are unwilling to stomach the rough seas of the free market, then we must not persist in these half measures, and should combine tougher regulation with these government bailouts.

Which brings me to the thought that prompted this whole post.   Considering all of the corporations that were "too big" to fail, what businesses out there might also qualify for such dubious protection?  I came up with one very good example: Microsoft.   Microsoft is a corporate behemoth.   Its operating system covers over 90% of the PC market, and it is a major player in smartphones.   Beyond that, it has its hands in sectors as diverse as video games, digital music players, and web advertising.   Not that Microsoft seems anywhere near bankruptcy, but it is certainly interesting to consider the question: what would we do if Microsoft were gone?