California has blackouts and brownouts every time too many people run their air conditioning at the same time. Imagine the power outages that would happen every night at 6pm, if everybody drove a plug-in hybrid!
Chaos. Total nightmare.
I've been trolling some power plant engineer forums, where the engineers are more than a little concerned about the emergence of electric cars. America's electric grid cannot handle a world with electric cars! Especially out east, where one sneeze can take out the eastern seaboard. It will take billions of dollars over the next 20 years to get the grid ready for an all-electric car fleet. The power plant engineers think that the current grid can only support 10 million plug-in vehicles... and even that requires letting the power plants control when you get to plug in your car!
Some on the forum suggested diesel... but being power plant engineers, they threw out some stats about how electric motors are always more efficient than diesel... True, but that's not looking at the whole picture. I dug through some data from the EPA and the US Department of Energy (and then updated Wikipedia's diesel engine page). Here's what I found:
- The drive motor on an electric car is about 85% efficient... but where does that electricity come from?
- In the US, 75% of the electricity comes from coal. So how efficient is that power plant? UPDATE: Coal's share is only 50%, according to a new study.
- A combined-cycle power plant runs at about 50% efficiency, meaning about half of the energy is lost as waste heat. Wind, solar, and hydro power plants can do better, almost 80%. However, some small and/or old plants are only 30% efficient.
- Also, all power plants in America lose some power between the plant and your home. This is called line loss, and can be anywhere between 3% and 10%.
- Diesel engines have a theoretical efficiency of 75%
- Current diesel engines are 45% efficient.
- Future diesel engines could be 55% efficient by 2010.
Interesting... This all spells bad news for electric cars. Best case scenario, an electric car is 65% efficient. Realistically, you could expect 40%, and the worst case is 23%. In contrast, you could realistically expect 45% efficiency with diesel now, and 65% efficiency 20 years from now isn't so crazy. There is some "line loss" for diesel too, meaning that it takes energy to get the diesel tankers to the filling stations... but I'd wager it's under 5%.
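Those numbers come from multiplying the stages together. Here's a quick back-of-the-envelope sketch using the figures from the bullets above (the exact line-loss splits per scenario are my assumptions, taken from the 3-10% range):

```python
# Rough well-to-wheel efficiency chains, using the figures from the post.
# These are illustrative numbers, not measurements.

def chain(*stages):
    """Multiply the efficiency of each stage together."""
    result = 1.0
    for s in stages:
        result *= s
    return result

motor = 0.85  # electric drive motor

best = chain(motor, 0.80, 0.97)       # hydro/wind plant, 3% line loss
realistic = chain(motor, 0.50, 0.95)  # combined-cycle plant, 5% line loss
worst = chain(motor, 0.30, 0.90)      # old plant, 10% line loss

print(f"electric best:      {best:.0%}")       # ~66%
print(f"electric realistic: {realistic:.0%}")  # ~40%
print(f"electric worst:     {worst:.0%}")      # ~23%

diesel_now = chain(0.45, 0.95)  # engine efficiency, ~5% fuel delivery loss
print(f"diesel realistic:   {diesel_now:.0%}")  # ~43%
```

Run it and the best/realistic/worst figures land within a point or two of the numbers above.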
To put it another way, in order for electric cars to be greener than biodiesel, we first need to upgrade our electric grid infrastructure, and also switch every power plant to solar!
Don't get me wrong, those are good ideas. Yes, we should go to solar. Yes, we should upgrade the infrastructure. But that's a 20-year project, folks! Stop wasting precious brain cells solving the electric car problem! It's solved enough, OK? Let the grid catch up, work on clean diesel for 10 years, then switch back.
UPDATE: Domenick pointed me to a DOE study on off-peak capacity, which contradicts some of what the power plant engineers said on the forum. This analysis claims off-peak capacity could fuel 73% of the fleet of US cars, trucks, vans, etc., if they were plug-in hybrids (not pure electric). However, some parts of the West and Pacific Northwest -- where the market for electric cars is highest -- would not be able to handle the load due to reliance on hydropower. It also assumes that the grid would operate at near capacity 100% of the time -- something that has NEVER been done, and has risks:
Even though we analyzed today’s grid with today’s LDV fleet and driving behavior, we applied several assumptions about the operating procedures of the entire electricity infrastructure, in which the grid has never been operated.
It's a good first pass at a feasibility study, but we need experimental data to verify the grid can handle the load... hopefully before California and Washington get hit with rolling blackouts in 2010.
Michelle sent me an interesting article: Enterprise Search Just not there Yet. She and I both agreed... it is a terrible analysis.
To sum up, a lot of people are complaining that their enterprise search appliances aren't working right. Why doesn't it work like Google? they all say... I can always find relevant information on Google! Well, I got news for you:
People get paid LOTS of money to make sure Google can find their content.
People have meetings about getting high rankings... they hone their content. They obsess about keywords, and making sure the content is written in a readable format. They obsess about URLs, and make sure other pages link to this content. They register with a bunch of indexes, catalogs, and online yellow pages to boost relevance. They set up whole web sites for specific topics, neatly organized with clear, browsable topics. They hire very expensive specialists in SEO, and information architecture. In other words, internet content creators actually care if people find their content!
Google has it easy...
In contrast, how many enterprise employees obsess about findability, browseability, proper language, or keywords? How much of their content is even intended for an outside audience? How many of them even bother to enhance their content with useful metadata like title or comment? How many of them actively promote their content, and ask people to link to it?
Pretty nearly zero...
Without this effort, not even Google's laudable algorithm can find useful content in the enterprise... as is evidenced by the general disappointment with the Google search appliance. No auto categorization engine can save you. No search engine will rescue you. No matter what people would like to believe, no software can ever replace a human being who actually gives a damn.
So... how do we fix the enterprise findability problem? It won't happen until people start caring both about finding others' content, and about others finding theirs. I suggest you take advantage of humans' natural competitive nature... cash incentives might backfire, but nothing motivates people more than "your hit count is below average."
Start publicly ranking people on how findable their content is, and I guarantee that things will improve.
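A leaderboard like that could be as simple as sorting authors by search hits per document, and flagging whoever falls below the average. A minimal sketch (all names and numbers here are made up):

```python
# Hypothetical search-analytics data: author -> (documents published, search hits).
stats = {
    "alice": (12, 340),
    "bob": (30, 95),
    "carol": (8, 410),
    "dave": (25, 60),
}

# Hits per document: a crude "findability score" for each author.
scores = {name: hits / docs for name, (docs, hits) in stats.items()}
average = sum(scores.values()) / len(scores)

# Public ranking, with the below-average folks called out.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    flag = "" if score >= average else "  <-- below average!"
    print(f"{name:6s} {score:5.1f} hits/doc{flag}")
```

Nothing fancy, but pinning it on the intranet home page would get attention fast.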
So I was reading up on John Newton's impressions of the Enterprise 2.0 conference a few weeks back... he was frustrated by the lack of a unifying definition of just what it was:
this doesn't mean that there was a lot of clarity on the meaning of the term Enterprise 2.0 at the conference. Although Web 2.0 had no less than Tim O'Reilly and John Battelle to define what that term means (barely), Enterprise 2.0 has no such authority. Consensus says that it is just Web 2.0 for the enterprise. However, researching the concept a couple of years ago, E2.0 is about taking the social aspects of Web 2.0, collaboration, social networks, user contribution, wisdom of crowds and social tagging and voting and applying it to information, documents and content in the enterprise
Interesting... blogs allow anybody to speak on a topic, and report news... Wikis allow anybody to take part in creating an authoritative knowledge repository... social networks allow people to bypass hierarchy structures and get things done by making "friends" and "connections" that want to help you.
Web 2.0 fundamentally means the end of the expert, but it took two "experts" to define that.
How deliciously ironic...
In contrast, since there is no accepted "expert" telling us what Enterprise 2.0 is, and since we're all just a bunch of amateurs fumbling towards the right answer, Enterprise 2.0 is actually more Web 2.0 than Web 2.0. We know a fundamental change is occurring, we just aren't quite sure what it will look like when we're done.
Well, then... I guess the right thing to do is sit back, and let these guys fight it out for a while. Let the self-anointed ones battle for mindshare, let the answer present itself, and then come up with a definition.
I'll throw in my two cents next week...
I love energy... I always thought environmentalists got it wrong about energy. The problem isn't overconsumption, it's unsustainability. So, go ahead and drive your Hummer, as long as it runs on biodiesel from sources like algae or bacteria. If Big Oil were sharp, they would stop denying global warming, and embrace new carbon-negative oil technologies before the high-tech venture capitalists steal all their business...
To add insult to injury, it seems that some prominent scientists want to put Big Oil on trial for global warming. At first, I believed that these kinds of trials would go exactly nowhere. Until I found out about one case backed by a dream team of trial lawyers: Steve Berman and Steve Susman.
The former was the lead lawyer representing 13 states against Big Tobacco in their historic defeat in the 1990s. The latter was the man who defended Big Tobacco. Now, they have teamed up and are taking on Big Oil, with pretty much the same strategy...
The Atlantic outlines the logic of the case quite well. There have been dozens of lawsuits against Big Tobacco, dating as far back as the 1950s. The plaintiffs were all the same -- people who got addicted to cigarettes, and got health problems, and were now suing the tobacco industry for selling an unsafe product. Early anti-tobacco lawsuits all ended the same way: the judge would declare that every consumer product has some danger, but it's not the judge's responsibility to decide an acceptable level of safety.
Defining what is an "acceptable level of safety" is up to Congress... who are always on top of things...
This of course led Big Tobacco in the past -- just like Big Oil right now -- to funnel millions of dollars to "skeptical" scientists, and use them to pass off PR as genuine research... and use that to influence congress and the media into inaction. Not to mention the millions in campaign contributions, free trips, lobbyist jobs, etc. etc. etc.
Unfortunately, Big Tobacco finally realized the flaw in that plan:
- When you pass off PR as genuine scientific research, it is a lie.
- When you lie about consumer products you sell, it is fraud.
- When you defraud consumers, class action lawsuits are not far behind.
- When you get sued, you have to produce old memos, emails, and data relevant to the case... which are usually very incriminating
The Steves' plan is not to claim that oil is causing "too much harm." The plan is to prove that Big Oil used both licit and illicit means to downplay the actual harm of their product, whatever that harm may be. Essentially, when companies engage in fraud, they make it impossible for a consumer to make a reasonable choice about whether or not to use their product... and congress has a long list of laws against that...
Essentially, even if oil is 90% safe, if the Steves can prove that Big Oil claimed it was 95% safe, and that Big Oil downplayed evidence to the contrary, then Big Oil is guilty of both fraud, and conspiracy to commit fraud. That exact tactic brought down Big Tobacco, and it seems like it would be pretty easy to do the same to Big Oil...
I, for one, am curious to see how all this pans out...
The third installment of Where The Hell Is Matt is available on YouTube... and it's the best one yet:
I first heard about this from April, who works at Google with Matt's girlfriend. This guy Matt has a dorky little "annoying dance" that he would do from time to time. One day, he quit his job, and traveled around the world with some friends. At the request of a buddy, he did his annoying dance on the streets of Vietnam, and he filmed it.
Then he kept doing it... all... over... the... world...
He put his first video up on YouTube, and it slowly became a huge hit. Stride Gum offered to pay for his plane tickets, and let him make a second video. I especially like the outtakes. As you can see, he took a slightly different approach for the third one... I liked India and Korea the best.
AIIM sent me an email about their new social networking site for ECM folks, named Information Zen. It's built on top of Ning, like a lot of other community sites I belong to. Mancini is on there, and it's probably only a matter of hours before Billy is up there too.
I like it a lot more than the standard AIIM site... I hope they move more of their content over. They have videos, groups, and forums, all broken down by ECM aspect: records management, enterprise search, content management, eDiscovery, etc.
Should be a good place to get community help with strategic ECM questions... it also might be good for unbiased information about ECM vendors: how tough is it to set up, deploy, maintain, customize, etc.
Seems to be growing fast... I joined, then I wrote this blog post, and in that time they got 6 new members! Over 600 members in a few hours... not bad, AIIM!
So I was finally updating my Tortoise SVN client for Subversion... that annoying little window has been popping up for months, so I finally clicked on it. It took me to their blog... which at first confused me a bit:
Notice the inherent problems with using Google Ads on a site with poor usability! This is supposed to be the splash page for the upgrade, and what happens? I'm greeted with two links to download Subversion clients, neither of which are Tortoise SVN! Once you click the link to read more about the blog post, you get some helpful download links.
But I gotta say, those two other subversion clients probably steal a LOT of traffic and downloads from Tortoise SVN.
Is it just me, or are URLs totally backwards? For example, take this email address:
Nothing too odd... the email is going to bob, who works in finance at the company. Not many folks do email addresses like this (they might instead do email@example.com), but I did it that way to compare it against a typical URL:
Nothing too odd there, eh? You are going to the blog for the company, the article named my-hands-are-bananas, published in June, 2008.
What always bugged me is how they mixed up the order. A URL is supposed to be directions to find information... and directions always start off general (head east on I-94) and end up very specific (turn off the paved road and stop at the fifth pink trailer home).
But URLs totally mix up the order:
Putting directions in that order makes about as much sense as these directions: turn left at reception, go to this company, go to France, then make a right.
A properly consistent URL should actually be structured like so:
Adding to the oddness... things like .com and .org are called top-level domains. Yeah... it really makes sense to call something "top" when it's actually on the "bottom."
Louis in the comments suggested that maybe this would be even better:
That would sure make type-ahead URL matching a hell of a lot easier...
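To see why, here's a tiny sketch using a hypothetical blog.company.com-style hostname: reverse the labels so the general part comes first, and type-ahead matching becomes a plain prefix scan over a sorted list:

```python
def reverse_host(host):
    """Turn 'blog.company.com' into 'com.company.blog'."""
    return ".".join(reversed(host.split(".")))

hosts = ["blog.company.com", "mail.company.com", "blog.example.org"]
reversed_hosts = sorted(reverse_host(h) for h in hosts)
# ['com.company.blog', 'com.company.mail', 'org.example.blog']

# Type-ahead: everything under company.com is one contiguous prefix scan.
matches = [h for h in reversed_hosts if h.startswith("com.company.")]
print(matches)  # ['com.company.blog', 'com.company.mail']
```

Incidentally, Java package names already use this reverse-domain order, and some web crawlers store URLs in exactly this reversed form for the same reason.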
Attention internet: please change.
Science Daily reported on a new paper on this topic called “The Hidden Perils of Career Concerns in R&D Organizations”. The problem can be summed up like this:
- Great developers want to demonstrate their talent, so they create highly sophisticated code that does amazingly complex things, which might not actually be what the customer needs...
- Terrible developers want to mask their incompetence, so they create enormously obfuscated code that nobody can understand, so they must be called upon to make any changes...
Well that sucks... according to the authors, both groups apparently have a strong incentive to make overly complex code! At least they do so in the majority of software development firms. I'd add a third point:
- High-maintenance developers need validation that they solved a tricky problem, so they force other developers and users through complex configuration and initialization of their code, just to make everyone appreciate the complexity of the problem...
They say the solution is more short-term incentives tied to the success of a project. I've got no problem with that... but when it comes to code complexity, "success" cannot be determined for years after product delivery. Only after people need to patch, upgrade, and modify the solution can you really tell how successful you were.
I'm a fan of the old fixes: peer review, customer usability tests, and no code ownership. All three encourage simplicity, and discourage needless complexity.
UPDATE: Lively debate in the comments thread... so I wanted to update with my latest revelation. Great developers only write overly complex code when they don't get recognition of their talent. If they don't get verbal or monetary recognition from their manager and/or peers, they will seek out ways to prove their excellence. In other words: bafflingly complex code. They also do so because of honest mistakes: their code makes sense to them, so they believe it makes sense to others. The curse of knowledge, if you will.
So what's the ultimate solution?
- Peer review all code for new developers: both great and not so great. The time you spend up front will more than make up for easy code maintainability in the future.
- Have a training program in place to mentor the less skilled developers. Make sure they know they add value to the team; it's just that they don't have enough experience yet to solve the tough problems.
- Make sure your highly skilled developers get the recognition they deserve... especially if they are working on a project that is beneath their skill level.
- Let highly skilled developers spend some spare time helping out open-source side projects, if their current task is too tedious or too simple to occupy 100% of their time. That will give them the recognition they need, and you still get your project on time.
- Transfer ownership of code about every six months. This will ensure that code makes sense to everybody.
- Force the developers to watch people try to use their solutions. In silence. Let them see hard proof of how tough their systems are to use, maintain, or customize, to encourage them to solve the usability problem as well.
This post is already too long... I may expand on this at a later date.
All right... the Twitterverse is all up in arms about how crashy it is, and the lack of a business model... well, at least Jake and Radar are... so I figured I'd throw in my 2 cents, and solve both problems at the same time:
How to make Twitter crash less:
- Ditch Rails.
- Ditch Ruby.
- Rewrite it for Python / Django.
- Use Google App Engine for hosting.
Done and done. Pownce has proven that it's easy to redo everything Twitter did (but better) using Django... and in a remarkably short amount of time. Plus, if you use Django, you can port your entire system to Google App Engine, and get insane scalability and uptime for cheap. Google might even be a willing partner for such a high-profile client with such widely known scalability problems...
I always thought Rails was the wrong tool for Twitter... I'm sure the pragmatic programmers would be all up in arms if Twitter ditched their favorite tool... but who cares? Using the same tool for every job is woefully unpragmatic. "But Rails can do it! Rails can do it!" Ugh... At times like this I let Chris Rock do the talking:
Sure, you can do it, but that doesn't mean it should be done! You can drive your car with your feet if you want to, that doesn't make it a good idea!
Now, regarding the business model, there are these options:
- Charge $10 per year for people who tweet more than 5 times per day.
- Engage businesses, sell them "Twitter Appliances," and teach them how presence can boost communication and productivity.
Seems pretty damn straightforward to me... at least, that's what I'd do if I had a brand like Twitter. Move into more of an evangelist model, teach people to collaborate with presence, and get into the enterprise before somebody else beats you to the punch. Heck, they could even sell enterprisey books, and be the first "sexy" enterprise app. I'm baffled why they haven't already done so.
In the meantime, I've moved on. Check me out on Friend Feed.
UPDATE: Garrick posted on another Twitter business model... The scalability problem is not in the number of tweets per day, but in the number of followers you have. Some people have thousands of followers, so one tweet per day from a popular person consumes more resources than a friendless one tweeting every hour. Therefore, perhaps you should charge people to be followers? I'm not 100% sold, because that would discourage popularity. It's also vulnerable to Twitter syndicators like FriendFeed... Why should everybody pay $10 to follow Scoble on Twitter? Just follow his FriendFeed instead.
I first heard that Larry Ellison was the inspiration for Iron Man from fake Steve... apparently Robert Downey Jr. studied video tapes of Larry in order to develop his billionaire persona... complete with goatee, mussed hair, Jesus hands, and everything! Skeptical? You can view video evidence yourself.
Well... it now seems that Oracle is getting in on this reality blur as well...
In my mailbox today, I got the Oracle partner newsletter about a cross-promotional campaign with Marvel. They are promoting the new Marvel Trilogy, starring Iron Man. The tagline is Hardware by Marvel, Software by Oracle.
Since Marvel did the graphics, the advert looks pretty nifty. It's a nice deviation from the standard Oracle marketing material: red, white, and boring... but this is just gonna make conspiracy nuts suspicious.
So what do you think Larry really does in his spare time?
I wouldn't be surprised if he had his own flying suit... but I'd be pretty shocked if it turned out he used it to battle warlords in Afghanistan...
I just love the exasperated, one-eye-open gaze...
Oracle is now doing a quarterly customer webcast to keep folks up to date about the latest changes in the product line. The next one will be June 5, 2008 at 9:00 a.m. Pacific Time. If you'd like to attend, you need to register with Intercall:
It's for customers and partners only... so be sure to use your company email address... you also might want to read more about getting Stellent ECM announcements...
Apologies for the esoteric post, folks... but this is kind of important... Two folks from Yahoo, plus two folks from UCLA, have just released a paper in the ACM about a new kind of parallel algorithm: Map-Reduce-Merge.
If you don't know about MapReduce, it's the algorithm that makes most of Google possible. It's a simple algorithm that allows you to break a complex problem into hundreds of smaller problems, use hundreds of computers to solve them, then stitch the complete solution back together. Google says it's excellent for:
"distributed grep, distributed sort, web link-graph reversal, term-vector per host, web access log stats, inverted index construction, document clustering, machine learning, statistical machine translation..."
bla bla bla... but MapReduce can't do joins between relational data sets. In other words, it's great for making a search engine, but woefully impractical for virtually every business application known to man... although some MapReduce-based projects are trying anyway (CouchDB, Hadoop, etc.)
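To make the algorithm concrete, here's the canonical toy example, word counting, as a single-machine sketch of the map → shuffle → reduce flow (the real frameworks distribute each of these phases across machines; this just shows the shape):

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group values by key (the framework does this between phases)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
print(reduce_phase(shuffle(map_phase(docs))))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

Each map and reduce call is independent, which is exactly what lets the framework fan the work out across hundreds of machines.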
UPDATE: Some Hadoop fans mentioned in the comments that MapReduce can do joins in the Map step or the Reduce step... but it's highly restrictive in the Map step, and sometimes slow in the Reduce step... joins are possible, but sometimes impractical.
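For the curious, here's roughly what a reduce-side join looks like, and why it can be slow: tag each row with its source table, group by the join key, then pair rows up in the reducer. Every row sharing a key has to be shuffled onto the same reducer, which is where the cost goes. This is a toy sketch of the pattern, not any real Hadoop API:

```python
from collections import defaultdict

# Two tiny "tables": users(id, name) and orders(order_id, user_id, item).
users = [(1, "alice"), (2, "bob")]
orders = [(101, 1, "book"), (102, 1, "lamp"), (103, 2, "mug")]

# Map: tag each row with its source and emit (join_key, tagged_row).
pairs = [(uid, ("user", name)) for uid, name in users]
pairs += [(uid, ("order", item)) for oid, uid, item in orders]

# Shuffle: group everything by the join key.
groups = defaultdict(list)
for key, row in pairs:
    groups[key].append(row)

# Reduce: within each key, cross the user rows with the order rows.
joined = []
for uid, rows in groups.items():
    names = [v for tag, v in rows if tag == "user"]
    items = [v for tag, v in rows if tag == "order"]
    joined += [(uid, n, i) for n in names for i in items]

print(sorted(joined))
# [(1, 'alice', 'book'), (1, 'alice', 'lamp'), (2, 'bob', 'mug')]
```

Workable, but the reducer has to buffer one side of the join in memory per key, which is exactly the "sometimes slow" part.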
Well... this latest twist from the Yahoo folks fixes that: they claim Map-Reduce-Merge now supports table JOINs. No proof as of yet, but there are a lot of folks staking their reputations on this... so it's a fair bet. The Hadoop folks seem to be experimenting with Map-Reduce-Merge... so if they spit out some new insanely fast benchmarks, my guess is that this is for real...
What does this mean for relational databases like Oracle? Uncertain... but I did hear a juicy rumor about 15 months back: some guy from Yahoo sat down in a room with Oracle's math PhDs, and spent a day discussing an algorithm for super-fast multidimensional table joins... like sub-second performance on 14-table relational queries, with no upper limit. My sources told me the Oracle dudes were floored, and started making immediate plans to integrate some new stuff into their database. The Yahoo connection made me think this might be the Map-Reduce-Merge concept...
Coincidence? Perhaps... but a juicy rumor nonetheless.
Well, this is unfortunate... CMS Watch is reporting a rumor about an Oracle Wiki incident. An Oracle partner named Sten Vesterli posted some less-than-positive feedback about WebCenter on the Oracle Wiki... he was promptly flamed by an Oracle product manager, then had his postings removed:
I placed some of the description and the pro/con discussion from my upcoming paper comparing Oracle development tools on the Oracle Wiki. And just like when I posted something not unambiguously positive about Oracle WebCenter on the Wiki, I was immediately flamed by an Oracle product manager, and any trace of negativity edited out of one of my pages.
Oops... looks like a Web 2.0 malfunction.
Also, Sten Vesterli is an Oracle ACE Director, like me. That means we have multiple channels for criticism if we don't like the feature set of the product. We're expected to extend Oracle some level of professional courtesy when we give criticism. I occasionally point out the flaws in Oracle products, but I almost always offer a workaround, and I don't put them on places as high profile as the Oracle Wiki... Naturally, some folks at Oracle would feel Sten was being a tad rude...
But ultimately, a wiki is the wrong place for criticism. Criticism almost always contains judgment, which by definition violates the neutral-point-of-view policy found on most wikis -- even Wikipedia. As Justin Kestelyn says:
A wiki is not the place for opinion, because opinion does not invite editing, only response.
The wiki was probably the wrong forum for Sten. Want to rant about WebCenter? Then your text belongs on a blog. Oracle's policy should simply be that: criticism belongs on your blog, not on our wiki, or any wiki. Then they should monitor pages that are "hot topics," and delete anything that looks like a rant. Clean and simple.
Hopefully Oracle doesn't try to lock down access to the wiki because of this drama...
UPDATE: Justin got in touch with Sten to figure out what really happened; it didn't seem to involve WebCenter, and CMS Watch blew it all out of proportion... The wiki is thankfully back to business as usual.
Microsoft has been pushing a new XML standard for word processing, OOXML. It's generally regarded as unnecessary, not to mention overly complex and weird... so much so that not even Microsoft Office 2007 passes conformance tests.
Anyway, the world was a bit shocked when Norway voted YES to make it an ISO standard... OOXML looked dead in the water, until this shocker gave it new life... so one guy on the 30-person committee decided to give the inside scoop:
...Halfway through the proceedings, a committee member had asked for (and received) assurance that the Chairman would take part in the final decision, as he had for the DIS vote back in August. It now transpired that the BRM participants had also been invited to stay behind. 23 people were therefore dismissed and we were down to seven. In addition to Standard Norway’s three, there were four “experts”: Microsoft Norway’s chief lobbyist, a guy from StatoilHydro (national oil company; big MS Office user), a K185 old-timer, and me. In one fell swoop the balance of forces [about rejecting OOXML] had changed from 80/20 to 50/50 and the remaining experts discussed back and forth for 20 minutes or so without reaching any agreement...
...The VP thereupon declared that there was still no consensus, so the decision would be taken by him. And his decision was to vote Yes. So this one bureaucrat, a man who by his own admission had no understanding of the technical issues, had chosen to ignore the advice of his Chairman, of 80% of his technical experts, and of 100% of the K185 old-timers. For the Chairman, only one course of action was possible.
Sounds like election fraud to me... if true, this could cause a pretti nasti backlash.
UPDATE: It looks like Brazil, India, and South Africa are challenging this vote. OOXML looks even deader:
Microsoft said last week that it does not expect to make its current generation of office productivity software, Office 2007, compliant with the ISO/IEC version of the OOXML standard... Instead it will issue a patch allowing that software to read and write files compatible with the rival OpenDocument Format, which has already been adopted as standard ISO/IEC 26300.
I warned you... them moose can get pretti nasti...
So there's this whiny little gated community here in the Twin Cities called North Oaks. When I was a kid, I had a white hot hatred of North Oaks... and now I feel justified. Apparently, they're suing Google to keep them off the maps.
Who do they think they are? Area 51?
Apparently, all the roads are privately owned, and there's a no-trespassing notice for anybody who ventures into the town of 4,500 people. Google does remove maps for military bases, but this is the first time an entire frigging town has requested removal.
I'm curious to see how this pans out...
Ever wonder why erroneous loudmouths get more airplay than the rest of us? I'm not just talking about radio shock jocks and political pundits, but technology bloggers as well. When you let your emotions run wild, and make crazy (probably false) posts, you usually get a bigger fan base.
Why the heck would that happen? Why do blogs with valid, rational discourse languish, whereas those who are wrong, wrong, just plain wrong get lots of viewers and comments?
Jeff Atwood over at Coding Horror had a recent epiphany along these lines. His technology blog is a little haphazard, filled with lots of good nuggets -- as well as plenty of corrections in the comments. Jeff conforms to the philosophy of strong opinions, loosely held. He says he's not an "expert," he's an amateur. But since software is such a new industry, pretty much everyone is an amateur... And, unlike most folks in the software industry, he's not afraid to admit it.
I think it goes a bit deeper than that...
When Jeff says something that is just plain wrong, it makes people angry, which makes them do something. They try to be the first to correct him in the comments, or it starts a conversation on other blogs that link back to him. His writing is humorous, and I've linked to some of his more controversial posts (such as Rails Is For Douchebags), but that doesn't mean his opinions are valid...
You don't get a popular blog by being correct: you get it by being wrong in a way that makes people react. If you're right, you'll get a decent reputation, but your zone of influence will be smaller. People easily forgive you for being wrong... but they never forgive you for being right.
Linkbait, flamebait, trollbait, whatever you want to call it... it works wonders to boost popularity.
UPDATE: Just to be clear, I like Coding Horror, as I mentioned in the comments below. I just wanted to make the observation that generating an emotional response seems to be the better path to blog popularity... for what it's worth.
CMS Watch has some interesting reflections on EMC World and Documentum... apparently, EMC still has decent Enterprise Content Management products, but there's a real lack of enthusiasm in EMC about the whole thing:
Under the covers there remains some good technology and some good technologists, but there just doesn't seem to be the enthusiasm in the rest of EMC to really get behind it. One way of classifying these two groups is that they consist of the remnant Documentum products (built and acquired) over the years. We see many elements of the collaborative DM that Documentum majored on in the past in today's Knowledge Worker division, alongside the updated eRoom offering. In the Interactive Media group we see the old Bulldog DAM products given a fresh coat of paint. Both looked fine in the demo, but in talking to broader EMC sales staff, there was little interest or knowledge of these areas.
The CMS Watch article is also an interesting intro to Content Management and Archiving (CMA)... which seems to be the path that a lot of Enterprise Content Management vendors seem to be taking. Oracle's plan to achieve CMA is with a nice blend of Stellent and their Universal Online Archive... I'll go into more depth in my next book ;-)
As Billy noted with some statistics, archiving is a big deal for a complete ECM solution... It seems like some folks at Documentum "get it," but the jury is out whether the EMC folks will listen...