At least, that's the opinion of Joel Spolsky...
...this search for the holy grail of program quality is leading a lot of people to a lot of dead ends. The Windows Vista team at Microsoft is a case in point. Apparently -- and this is all based on blog rumors and innuendo -- Microsoft has had a long-term policy of eliminating all software testers who don't know how to write code, replacing them with what they call SDETs, Software Development Engineers in Test: programmers who write automated testing scripts.
The old testers at Microsoft checked lots of things: they checked if fonts were consistent and legible, they checked that the location of controls on dialog boxes was reasonable and neatly aligned, they checked whether the screen flickered when you did things, they looked at how the UI flowed, they considered how easy the software was to use, how consistent the wording was, they worried about performance, they checked the spelling and grammar of all the error messages, and they spent a lot of time making sure that the user interface was consistent from one part of the product to another, because a consistent user interface is easier to use than an inconsistent one.
None of those things could be checked by automated scripts. And so one result of the new emphasis on automated testing was that the Vista release of Windows was extremely inconsistent and unpolished.
This isn't to say that automated tests are useless... far from it. However, if your requirements for "tester" are the same as your requirements for "programmer," then you're not really doing testing. All programmers test their applications... that is, they test them in the way the application is supposed to be used. They are not bothered by inconsistencies, usability problems, or how the product "feels." Your target market cares... but your developers usually don't.
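To make the distinction concrete, here's a minimal sketch of the kind of check an automated script handles well. The `format_error` function and its messages are hypothetical, invented just for illustration; the point is what the assertions can and cannot tell you.

```python
import unittest

def format_error(code):
    """Return a user-facing message for an error code (hypothetical example)."""
    messages = {
        404: "The page you requested could not be found.",
        500: "Something went wrong on our end. Please try again.",
    }
    return messages.get(code, "An unknown error occurred.")

class TestErrorMessages(unittest.TestCase):
    # An automated test can verify the software works as designed:
    def test_known_code(self):
        self.assertEqual(format_error(404),
                         "The page you requested could not be found.")

    def test_unknown_code(self):
        self.assertEqual(format_error(418), "An unknown error occurred.")

    # But no assertion here can tell you whether the wording is
    # friendly, the font is legible, or the dialog the message
    # appears in is laid out consistently. That takes a human.

if __name__ == "__main__":
    unittest.main(exit=False)
```

The tests pin down behavior, not quality: they'll catch a regression in the error text, but they'd pass just as happily if every message were hostile, misspelled, or rendered in an unreadable font.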
For that, you absolutely positively need testers who are non-programmers. You need feedback from people like your target market to tell you how to make the product better. Even if your target market is nothing but developers -- say, you're making a new programming language -- you still need non-developers to test your products... there's simply no better way to test intuitiveness.
Most importantly, you need to teach your programmers that such feedback should be taken very seriously... although always with a grain of salt. Requests for better usability should always be acted on... whereas requests for new functionality need more scrutiny. In my experience, highly technical feature requests frequently mask the true needs of the client. You should understand the goals of your clients and give your developers the freedom to help achieve those goals.
As my database professor John Carlis always said... the client doesn't always know what they want, but they are very fast learners!
UPDATE: Lots of great quotes in the comments... my favorite: "Automated tests ensure the software is working as designed. Human tests ensure the design is working."