Articles about computer software, hardware, and the internet.

Where are all the Oracle ECM Verticals?

I've been wondering for some time why there aren't more 3rd party vertical applications in the Oracle ECM ecosystem... Other ECM vendors have verticals... legal, health care, technical publications, yadda yadda yadda... but not as many for Oracle... especially from 3rd parties. I've helped three different organizations create hosted solutions that were vertical apps... but I've seen very few packaged apps that slap on top of ECM.

Why?

I had hoped that after writing my book on developing apps with Oracle Enterprise Content Management, more folks would be making them. Even if you don't use the built-in UI framework, I had hoped that people would use its Service-Oriented Architecture to support some kind of industry-specific web application. I know of several Oracle partners who have industry-specific experience, and could make a killer app for those industries, but choose not to.

Why?

Maybe it's because the folks who know the Oracle ECM APIs the best are small firms, and making a vertical is a bit risky. Also, as Ken Jorrison always reminds me, people who buy enterprise software place a HUGE premium on support, so any such initiative would need a solid 24x7 support infrastructure. Couple that with the fact that Stellent talent is hard to find, and there's a lot of work out there... so the best short-term business decision would be to go after the consulting dollars...

Do you work for a company that sells a pre-packaged Stellent vertical application? If so, leave a comment here... I'd like to hear about them.

Do it in Under 300 Milliseconds, or You Are Painfully Slow!

I'm working on a pet theory about "slowness" in user interfaces... triggered in part because of issues in a new-ish Oracle product that shall remain nameless...

I'm sure other UI gurus have noticed this before, but when you click a button or perform some other UI task, and it takes longer than about half a second, you will perceive it to be "slow." Why? Who knows! Is it a hard and fast rule? Or just an approximation? I think the root of this answer lies in neurobiology...

I recently devoured the book The Brain That Changes Itself. Highly recommended... It contains some amazing stories about a phenomenon called neuroplasticity, which is essentially the brain's amazing ability to re-wire itself. It tells many stories about people with learning disabilities, strokes, cerebral palsy, autism, or even blindness, and how these people "rewired" their brains to heal themselves!

In one section about amputees, they mentioned that it takes 300 milliseconds for a brain signal to reach the hand. That made me think... I wonder if there is a correlation between that number and the threshold at which people get annoyed with "slow" computers? Maybe your brain "thinks" the computer is actually a part of your body, and if it doesn't respond in 300 milliseconds, you get the feeling that something is wrong?

In a section about pain, they emphasized the fact that your brain doesn't "know" where your body ends and the world begins. For example, you can perform the following experiment to prove it to yourself:

  • Place your right arm on a table, behind a screen so you can't see it.
  • Place a fake rubber arm in front of the screen, aligned with your arm, and so you can see it.
  • Have an assistant gently stroke both the rubber arm and your arm in the same way for a few minutes.
  • Next, have them just stroke the rubber arm.
  • Your brain will actually "feel" your arm being stroked when you see the rubber arm being stroked!

This doesn't just work with rubber arms... it also works if you just stroke the table in front of you! Doctors have used similar kinds of trickery to cure amputees of phantom pain that they "feel" in their amputated limbs. Chronic muscle pain might have similar roots, but they didn't go into it much.

Anyway, since the brain doesn't "know" where the body ends, it probably reacts as if the computer is a part of your body. In other words, if your brain wants to make the computer do something, and you don't get feedback within 300 milliseconds, it might trigger some anxiety because it "thinks" something is wrong with your body! It doesn't know that it's just a computer... your brain is probably wired to trigger genuine anxiety when your computer doesn't behave as naturally as your hand! To avoid that, something should happen in under 300 milliseconds.

In practice, this means many things for better user interface design... but at the very minimum it means that computers should give feedback at least every 300 milliseconds. If something can be done in under 300 milliseconds, then it always should. If not, then you absolutely must give some kind of feedback that stuff is happening: a spinning wheel, a progress bar, maybe dancing frogs.
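To make the rule concrete, here's a minimal sketch of that feedback pattern in Python. The function name and threshold constant are my own invention, not from any particular UI toolkit: start a timer when the task begins, and only show the spinner if the task is still running when the 300 milliseconds are up.

```python
import threading

FEEDBACK_THRESHOLD = 0.3  # seconds -- the 300 millisecond rule of thumb

def run_with_feedback(task, on_slow):
    """Run task(); if it takes longer than the threshold, fire on_slow()
    (e.g. show a spinner) so the user isn't left hanging."""
    timer = threading.Timer(FEEDBACK_THRESHOLD, on_slow)
    timer.start()
    try:
        return task()
    finally:
        timer.cancel()  # fast tasks never show the spinner at all

# usage: a sub-millisecond task finishes before the timer fires, so no spinner
events = []
result = run_with_feedback(lambda: sum(range(1000)), lambda: events.append("spinner"))
```

The same idea ports directly to JavaScript with setTimeout/clearTimeout: schedule the spinner, and cancel the timeout if the work finishes first.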

Either way, 300 milliseconds is a pretty good rule of thumb to ensure your users avoid feeling anxious and ill while using your products...

Who Wants to be @OWASP on Twitter?

So I've been following the Open Web Application Security Project (OWASP) for some time... I was just reading about the next few Minneapolis OWASP meetings, and was kind of shocked to see Richard Stallman on the list of speakers for the October event...

Anyway, as I wandered over to Twitter, I noticed that nobody claimed the OWASP account yet! So I swiped it... Even tho I'm not really a hard-core OWASP guy. I don't run a chapter, nor is web app security my job... it's more like a hobby ;-)

So basically, I was wondering who "deserves" to have this Twitter account? I have no clue what to do with it. Meeting announcements? Full disclosure tweets? Open mockery of hacked applications?

(hat tip: Sam)

The Origin of "Bex"

A lot of people ask me, "where did you get the nickname 'BEX'?" Well, it all started about three million years in the future.

Let me explain...

I always was a big fan of British comedy... about fifteen years ago I came across an odd sci-fi British comedy called Red Dwarf that I particularly enjoyed. It mostly took place in the distant future, and it has quite the cult following, even today. In one episode, the main character Dave Lister mentioned that his sports hero was a chap named Jim Bexley Speed. He liked him so much he named one of his sons "Jim," and the other "Bexley."

I thought... Bexley, that's a pretty interesting name...

So, I started using it as a pseudonym. I shortened it to "Bex" and started using it as one of my internet nicknames... along with more unusual ones like Grin, Slosh, and Thudwallow. Naturally, back then nobody called me Mr. Bex any more than they called me Mr. Thudwallow...

Anyway, one year in college, there were too many Brians in my dorm. There were like 5 in a group of about 50. My dormmates decided I needed a nickname. One of the geekier ones asked me for my IRC handle, and I said "Bex." They liked it, so it stuck. It didn't hurt that my favorite beer at the time was Beck's Dark, although it did lead to some debate over the proper spelling of my new nickname...

After I left the dorm the following year, nobody called me "Bex." Apparently outside of my dorm, the ratio of Brians to non-Brians was at an acceptable level.

A few years later, I started working at Stellent... this was about 8 years before it was acquired by Oracle. Once again, there were too many Brians. I believe there were 4 in a company of about 100... including two on my 6-person dev team! Just like before, they asked for my handle... and I replied "Bex."

They liked it... so it stuck. I eventually used it as my email address (bex@stellent.com), I put it on my name plate for my cube, even on my business cards.

It probably would have remained a Stellent-only nickname; however, I spent so much time building the Stellent community -- including moderating user groups, writing 2 books, and giving numerous conference presentations -- that the name recognition started to grow. More people knew me as "Bex" than knew me as "Brian."

So... now I use it, or some variation, whenever I can.

And now I never have to concern myself with the ratio of Brians to non-Brians ever again...

Oracle Community Call For ECM

In case you didn't get the invite, Oracle's quarterly ECM community seminar is in about three weeks...

Americas / EMEA time zones: Customer Update
September 10, 2008
9:00am US PDT / 12:00pm US EDT / 16:00 GMT

You can register early... and just like last time, this is for Oracle customers and select partners only... There's a repeat webcast for Asia-Pacific at 7pm US Pacific time (12:00pm Sydney AEST, 10:00am Singapore).

If you missed the previous ones, they're up on Metalink. The last one covered how to find Stellent resources in Metalink... as well as Universal Online Archive, Captovation, what's going on with Verity, important patches, and general news items.

Put it on your calendar now!

Work Life Balance?

James asked when was the last time you saw somebody achieving work-life balance. I wondered: can computer geeks even achieve work-life balance? Hell no! Not the good ones anyway...

The good ones think about solving problems every hour of the day. I probably talk shop in the off-hours at least 20 hours per week with Alec, Michelle, and other folks... Is this a bad thing? I think Isaac Schlueter says it best:

I can’t work just 8 hours a day. Either you ride the biorhythm, with its highs and lows, and capitalize on every bit of go-time that your brain gives you, or you crank out boring hours for your handful of dimes. “Healthy work-life balance” is for bank tellers. An artist doesn’t stop being an artist when he goes home.

Amen to that... since software is a creative process, perhaps more similar to gene splicing than engineering, work-life balance is nearly impossible. Such balance is for process workers, not knowledge workers.

Forget "Knowledge Management", Focus on "Context Management"

I was always bugged by the buzzword "Knowledge Management." Not because it is a buzzword... but because it appears to NOT be a buzzword. A buzzword should either be really concrete, or vague enough to lead to questions -- like "Enterprise 2.0." Instead, "Knowledge Management" is somewhere in the middle, and sounds like annoying advice:

  • The key to getting rich is making more money!
  • The key to winning races is going faster!
  • The key to a smarter enterprise is managing your knowledge!

As such, I feel that the very phrase "Knowledge Management" might have led people to ask the wrong questions, and implement the wrong solutions... I think Chuck Klein down here in Albuquerque said it best:

We don't need "Knowledge Management." We need knowledge capture and context management.

That puts it pretty well... the goal is to capture as much knowledge as you can, and store it safely and securely. At the same time, you need to constantly gather more and more context, so you know what information to get to which people, and when. Information without context is worse than useless: it's merely clutter that wastes everyone's time.

Too many projects lose focus on the context management problem... some of the easy questions revolve around things like metadata and keywords, but that is rapidly becoming insufficient. As the amount of information you manage gets larger and larger, you need to ask a lot of hard questions before you can maximize the value of your system:

  • Who is the intended audience for this item?
  • Where does this item fit in my taxonomy?
  • If people like this document, what related items would they like?
  • How would people find this item, if your search engine did not exist?
  • Under what conditions should we archive this item to reduce clutter?
  • Under what conditions should we destroy this item to save storage space, and mitigate risk?
  • Who is the current user?
  • Where is this user, and how are they accessing the system?
  • What is the user's search, download, and feedback history?
  • What is the most effortless way to gather feedback from this user?
  • Based on this user's past behavior, what information are they likely to want next?

Some good Knowledge Management folks already ask these kinds of questions... but I feel that not enough clients understand what kinds of questions to ask. If we used more specific terminology -- like context management -- it would get people thinking about the problem in a very concrete way... and I feel it would lead to better implementations.

Damien Katz Comes Out Swinging Against REST

I've never been a huge fan of REST... Its advocates bash SOAP for being too complex -- which I agree with -- but then they make all kinds of bizarre claims to shame people into dropping SOAP entirely. I've heard my own fair share of nonsensical pro-REST rants, including:

  • The web is REST, therefore REST is as awesome as the web!
  • HTTP POST is evil, but GET, PUT, HEAD and DELETE are awesome!
  • Create/Read/Update/Delete is all you ever need!
  • URL parameters before the "?" are okie kosher, but after the "?" they are evil!
  • SOAP's superior security model is irrelevant; HTTPS and basic auth is all people need!
  • Simplifying SOAP is not an option! We must scrap the whole thing!
  • HTTP is perfect! PERFECT I SAY! There is never a need to tunnel data through it!

Personally, I'm a fan of using HTTP POST with SOAP formatted data in order to alter resources (create, edit, delete, move). If all you want is to read data, then your system should support SOAP based responses to a standard HTTP GET request on a URL. It solves most of the complexity problems that people complain about, without ditching your SOAP infrastructure. Also, it allows you to use the exact same API for both your web interface, and your developer API.

Oddly enough, this is exactly how the Stellent server works ;-)
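To illustrate, here's a rough Python sketch of that hybrid; the operation name, fields, and URL layout are all made up for illustration, not taken from Stellent or any real product. Writes wrap their payload in a minimal SOAP-style envelope and go over POST; reads are just a GET on a predictable URL.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def make_update_envelope(doc_id, title):
    """Wrap a (hypothetical) UpdateDocument call in a minimal SOAP envelope,
    suitable for an HTTP POST that alters a resource."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, "UpdateDocument")  # made-up operation name
    ET.SubElement(call, "docId").text = doc_id
    ET.SubElement(call, "title").text = title
    return ET.tostring(env, encoding="unicode")

def make_read_url(base, doc_id):
    """Reads need no envelope at all -- just a GET on a predictable URL
    that asks for a SOAP-formatted response."""
    return f"{base}/documents/{doc_id}?format=soap"

envelope = make_update_envelope("DOC-42", "Quarterly Report")
url = make_read_url("http://example.com/api", "DOC-42")
```

An actual client would POST the envelope with a text/xml content type, and parse the response body the same way it was built, so the web UI and the developer API can share one code path.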

Anyway, tech geek Damien Katz -- the inventor of the RESTful database CouchDB -- has recently come out swinging against the pro-REST nonsense:

Sam Ruby claims Katz is fighting straw men... but I'm not sure Sam's argument holds water. Katz is simply calling bullshit on the #1 claim of most REST fanboys. Sam can't have it both ways: he can't benefit from his rabid fan base, and not publicly shame them for their falsehoods. If Katz is fighting a straw man, then Ruby should make it clear to his minions that the #1 claim they use to push REST is totally false.

Then... THEN we can see if the other benefits justify scrapping SOAP... or using REST instead of something with a less fanatical fan base.

Are You An Idiot?

Ryan Curtis just sent me a quick email saying that the Idiot Test 6 will soon be released... Once it's around, I'll create a walkthrough just like I did for the other Idiot Tests... so the people who get stumped won't have to feel like idiots ;-)

I gotta say... Hosting the cheat sheets for the Idiot Test is kind of a double-edged sword... On the plus side, I get a few more hits and links to my blog... On the negative side, a lot of my page views come from people Googling the word "idiot"!

Seriously! As of August 11, if you Google idiot, I'm number 389 out of 75 million sites. Not the top 100, but still rather remarkable. Also, idiot is the #2 keyword that people use to find my site from Google! Ten of the top 50 keywords people use to find my site use some permutation of the word "idiot"... it's right up there with "stellent", "bex huff", "electric cars," "empathy," and "why does vista suck."

Good for page views... but man, it's tough on the ego...

UPDATE: great... as of August 25th, I'm now up to 287 out of 75.9 million. I wonder what it would take to hit the top 100?

Google Knol: The Wiki For "Experts" Too Dumb To Blog

Well, Google has been hyping their new application, Knol over the past few weeks. I first heard about this last December. I was unimpressed then, and am unimpressed now:

Google seems to believe that wiki-quality blogs belong in what they call a knol, short for knowledge unit. It will allow viewers to rank the value of posts, and allow the "expert" to get credit for their work... Here's the thing: there are already tons of "knols" on the web... Not to mention, the entire heart of Google search -- the PageRank -- already works as a de facto "knol ranking" system. All Google Knol does is lower the bar for so-called "experts" who are apparently too dumb to set up a blog... Besides, technologies like Microformats can be used to embed metadata into any mundane content. Why not have a "this is a knol" microformat, put them on any web page on the internet, and let the Googlebot Spider do the rest?

A bit harsh, yes... but nonetheless true.

There are other problems with Knol. Firstly, Wikipedia is popular explicitly because there is no financial incentive! As somebody said at the Enterprise 2.0 kickoff, Wikipedia is one of the top 10 websites, and they never advertise. As a result, Wikipedia throws away $100 million per year because they don't want to turn into some ad-heavy hunk of junk. If they did, their viewership would plummet. As soon as Google pays people, or starts showing ads, Knol is dead.

Secondly, non-experts are some of the most important contributors to wikis! They challenge the "experts" to justify their claims, when the non-expert finds something that contradicts the conventional group-think. Also, non-experts rework articles to lower the bar of what prior knowledge people need to have before they can understand the article. Understanding knowledge and teaching knowledge are vastly different things... many who have the first skill do not have the second, and vice versa. The former requires logic, the latter requires empathy, and it's an exceptional person who both knows and teaches subjects above a 12th grade level...

Thirdly, Knol's founding premise is fundamentally flawed. The problem is not lack of experts, the problem is lack of transparency. Reputation systems are useful to determine whom to trust, but so are the links at the bottom of every Wikipedia page. Also, any controversial topic has a quite lively discussion page, filled with counter-examples... people asking for credentials, further proof, etc. Wikipedia just needs a bit more eye candy to call out sections that have been frequently edited, and by whom.

I just don't see Google Knol getting anywhere... eventually, the Knol experts will discover that running a personal blog is more fun and rewarding than letting Google get all your ad revenue... and linking to yourself as a Wikipedia citation just might get you more page views than a Knol would.

Six Weeks Till Open World...

Just six weeks until Oracle Open World 2008. I anticipate chaos. Fun and learning, yes, but also chaos...

I used their online schedule builder to sift through the 1700 sessions they were offering, and built a personal calendar... from a usability perspective the schedule builder is very much sub-par, but it's vastly better than what they offered last year... I'm mainly focusing on the ECM track, plus a bit about Portals, Mashups, and a little bit of the WebCenter stuff.

I'm giving four talks this year, two in the general sessions, and two at the associated Open World Unconference... for those who aren't aware, an unconference is the technology equivalent of "open mic" night. Here's my schedule:

  • Monday 1pm-2pm: Enterprise 2.0, What it is and How You'll Fail, at Moscone West, 3rd Floor Overlooks #1
  • Monday 4pm-5pm: A Pragmatic Approach to Oracle ECM, at Moscone South, Rm 301, session 299246.
  • Wednesday 11am-12pm: Communication For Geeks -- How to influence your peers, your boss, and your clients, at Moscone West, 3rd Floor Overlooks #2
  • Thursday 10:30am-11:30am: Top 10 Ways To Integrate With Oracle ECM, at the Marriott, Golden Gate room C1, session 300043.

The first talk is essentially what I gave last week at the Enterprise 2.0 kickoff. The second one is mostly based on my upcoming book. The third is a repeat of the talk I gave at the MinneBar Unconference a few months back. The last one is a new twist on my "50 Ways to Integrate With UCM" talk.

Besides the sessions, there are also a handful of places to just "hang out," recharge your laptops, and escape the bustle:

Even tho Twitter ticked me off big time a while back, I'll probably be using it at this conference to keep track of folks, and let folks find me. Follow me on either Twitter or FriendFeed if you want to meet up at Open World.

XML End Tags Are Stupid...

Building on my observation that URLs are Backwards, I've decided that XML end-tags are stupid. For example, why do I have to write XML like this:

<head><title>foo</title></head>

Instead of just like this:

<head><title>foo</></>

Or even this:

<head><title>foo<//>

Huh? Crazy, I say... crazy! Coders have been dealing with generic closing parentheses and generic closing braces for decades... why complicate matters in XML? As Alec always says:

Your data format ain't better just because it has angle brackets in it...

Seriously, what semantic advantage is there? If you are creating XML of only minor complexity, it's just a bunch of extra typing that doesn't add any value... if you are using complex XML then I might see the advantage... but in those cases you would probably use an XML editor, which does all kinds of fancy code highlighting to make sure you can see everything... and the words in the end tag are superfluous anyway.

Besides... XML end tags are especially useless for HTML. Seriously... everything is DIV tags anyway. If your HTML is invalid, it usually doesn't help to know that you need to end an open DIV tag... the question is which one? Your only hope is an advanced editor, or decent HTML comment blocks.

Having the name of the tag in the end tag just makes the data format more bloated, and no more useful. I prefer JSON anyway:

{ "head": { "title": "foo" } }

Heaven...
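For what it's worth, both snippets above encode the same tiny tree, and Python's standard library can round-trip each one; the byte counts also make the end-tag overhead concrete:

```python
import json
import xml.etree.ElementTree as ET

xml_doc = "<head><title>foo</title></head>"
json_doc = '{ "head": { "title": "foo" } }'

# Both parsers recover the identical title text
xml_title = ET.fromstring(xml_doc).find("title").text
json_title = json.loads(json_doc)["head"]["title"]
assert xml_title == json_title == "foo"

# The XML version spends 15 of its 31 bytes on end tags alone
print(len(xml_doc), len(json_doc))
```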

Enterprise 2.0 Rant Available For All

Well, the reviews are in on my Enterprise 2.0 talk... and it was pretty well received.

...perhaps best of all, a deadpan delivery of "Enterprise 2.0: How You Will Fail" from the illustrious Bex Huff (an Oracle ACE Director). The latter was a source for some particularly good quotes, such as my favorite: "Where there is lack of shared purpose, information sharing leads to chaos." Amen, my friend.

Unfortunately, I was out of the room when they decided to do the schedule, so I went dead last at the end of the day. Serves me right for trying to network with Ajay Gandhi and Joe Strada. The audience dwindled a bit, but they seemed awake enough to get some value out of my talk.

In any event, I put the presentation up on Slideshare, and embedded it below. The viewer is kind of busted, so you might want to download it as well.

Unlike the other speakers, my goal was to rein in the excitement and buzz around Enterprise 2.0. I believe there is massive value in a company-wide E 2.0 initiative, but there will be equally massive stumbling blocks! Is your culture ready for it? Probably not. Almost certainly not. Therefore, you'll have to fail a few times before you'll see measurable value.

The only hope in getting value out of E 2.0 is by keeping an eye out for failures. Expect them! Embrace them! Experiment a little, and try new things anyway! Any Enterprise 2.0 initiative that cannot adjust in the face of massive failure will just be a waste of money. It's not just about the technology; Enterprise 2.0 is really about creating a culture of entrepreneurialism. You need this culture before you can find enterprise-wide value in E 2.0 tools.

Good E 2.0 initiatives are about tools that guide the creation of this culture, and are not just about getting everybody on the latest Web 2.0 gizmo. If your E 2.0 initiative revolves around getting everybody blogging, then you're just replacing email overload with blog overload... or wiki overload... or social software overload... or Twitter overload... Pick your poison. Departments with a good culture will find immediate value; others will find zero, or even negative value! You'll be spinning your wheels, but you won't be going anywhere.

I hope to give this talk again at the Open World unconference... Justin said that he'll be sending out an email blast so we can get registered on the wiki. That will make it 3 talks this year at Open World... see you there!

UPDATE: I'll be giving it at 1pm on Monday, Sept 22 at the Open World unconference. It will be at Moscone West 3rd Floor Overlooks, in case you'd like to see it live. I might change that time, if there's a lot of other stuff I'd like to see at that time, but I want to get it out of the way so I can enjoy the rest of the conference...

Cuil thinks I'm old...

So, following in the steps of other Minneapolis bloggers, I decided to try out the new hot search engine, Cuil. It bills itself as a "Google killer," and claims to already have a search index larger than Google itself.

That's pretty big talk...

Naturally, I started out with a vanity search, Brian "Bex" Huff, and I was a tad surprised with the results...

Just for the record, no, I'm not some angry 60-year-old dude. That is, unless Cuil is such an amazing search engine that it's indexing content from the future. Stupid Cuil...

Oracle Open World Around The Corner...

Oracle Open World starts September 21st... That's just 2 months away! Woot! Goosebumps, I tell you.

The conference last year was a bit chaotic for me... when I'm presenting, I never feel like I have enough time... And with 60,000 attendees, it can get a little crazy. This year will probably be similar. I'm doing a variation on my introduction to integrating with Stellent talk, because it was one of the top ten talks at Collaborate 08 a few months back. Nice... The IOUG folks were kind enough to score a spot for me, and I'm gonna soup up the talk a bit. Andy and I will probably also be giving an ECM business-strategy presentation that aligns with our upcoming book.

Then I can relax!

I haven't looked at too many of the other presentations yet, but the ones that won Jake's "pick a session" contest on Oracle Mix all look good. I'm glad to see that Dan Norris will be presenting on how to be an Oracle ACE... also, Lonneke Dikmans will be presenting a shootout between Oracle WebCenter and BEA Weblogic.

I'm sure that last one will be hyper political...

It's also good to see Eric Marcoux presenting a comparison between Oracle Portal, Oracle WebCenter, and Stellent... it's nice to see a Stellent presentation make the top 25. Although it will be tough to give a decent comparison of all 4 Oracle Portal Products PLUS Stellent in one talk... Good luck to Eric.

Your turn: are you going to Open World? If so, what would you like to see?

Enterprise 2.0: What It Is, and How You'll Fail

A few days ago Justin asked me if I would like to give a presentation at Oracle's Enterprise 2.0 Bootcamp on July 28th... I said sure, but only if I get to be controversial.

Hopefully this topic will be controversial enough!

UPDATE: the presentation went over quite well... and I've posted it online if you want to take a peek.

I've heard a lot of buzz about Enterprise 2.0 lately... and lots of Enterprise Content Management folks (including Newton, Pie, and Billy) still seem to be frustrated by the general lack of a coherent definition of what, exactly, Enterprise 2.0 is. Frankly, I think the very act of defining Enterprise 2.0 defeats the whole purpose, but I appreciate that some people need guidance...

The world is filled with "thought leaders" trying to railroad people into a narrow definition that is properly aligned with their own technology and ego... Some say E 2.0 is just Web 2.0 for the enterprise. Some say it's emerging enterprise architectures (SOA, ESB, CEP, IdM) that make services easier to govern and re-use. Others -- like the lame-os at Wikipedia -- say it's nothing more than enterprise social software. Others say it's just the Knowledge Management Beast rearing its ugly head yet again...

triple ish on that last one...

In my upcoming book, I spend a chapter on how Enterprise Content Management fits in with Enterprise 2.0... and after swimming in blogs for the past year I think I have synthesized an approximate definition that might make everybody happy:

Enterprise 2.0 is an emerging social and technical movement towards helping your business practices evolve. At its heart, its goals are to empower the right kind of change by connecting decision makers to information, to services and to people.

Swish! Leave a comment and tell me what you think... Hot or not?

It's vital to understand that E 2.0 is still a moving target... we know that the enterprise is changing radically, but we don't have enough hard data to say what it's changing into. However, I feel it's just the latest leap in the neverending goal to make information and services more re-usable.

As an added twist, E 2.0 also has at its core the goal of connecting people with each other in order to discover the tremendous value that exists outside "the process." If the purpose of process is efficiency, then why do so many people in enterprises complain that their process is horribly inefficient? It might be because your process just plain sucks, or it might be because the process is keeping you from changing something that has drastic side-effects outside your view of the company. The point is not to mock or destroy "the process," but to help processes evolve at the optimal rate. This is not possible unless all decision makers can see first-hand how their changes negatively affect other departments. This cannot be done with metrics alone: you need friendly hallway conversations between people who normally would hate each other.

Unfortunately, after coming up with that definition I was staring point blank at something a little unsettling... If people focus on the wrong things, Enterprise 2.0 will FAIL HORRIBLY the same way Knowledge Management FAILED HORRIBLY! For those who forget, Knowledge Management was some snake oil sold 20 years ago saying that access to information was the #1 problem... it's not. The problem is access to the right information at the right time in the right format. My industry -- Enterprise Content Management -- emerged from the ashes of Knowledge Management, trying to implement the few good ideas that it offered.

So... how did Knowledge Management fail? What implications does this have for the failure of Enterprise 2.0? I'm no psychic, but I anticipate that people might make similar mistakes... it all boils down to one problem: you're probably focusing on the wrong thing! How could you fail? Here are five ways:

The Lotus Notes "But Bull Duck"

After yesterday's post on how the best analogy for software design is gene splicing, Michelle pointed me to perhaps one of the saddest marketing ideas ever... The Lotus Notes "But Bull Duck", brought to you by the kind folks at IBM.

I am NOT making this up.

For some reason that I can only attribute to a prolonged lack of fiber, the Lotus folks at IBM decided to put together an online ad campaign about how wonderful Lotus Notes 8 is... Hosted at createsimplicity.com, it allows you to design your own unholy cross-breeds of different animals, and save them to your desktop... each has a different name when you download it. My favorite? The "But Bull Duck".

Apparently the new motto is unify and simplify.

Ummmm... I'm all for unification and simplicity... and I do appreciate the idea that the best analogy for software design is gene splicing... but comparing Lotus Notes to the "But Bull Duck" might not be the best way to get new customers... nor is it the best way to demonstrate to your existing customer base that it's time to upgrade. In all likelihood, the "But Bull Duck" will make them think your software is awkward and bizarre.

Viral marketing, or truth in advertising? You choose.

Software Design is Gene Splicing!

I came across a new article by ACM about Web Science, a new interdisciplinary approach to looking at how the web works. It's a great article about how we in software design have been thinking about the web all wrong, and need a more broad approach to solving its problems. Master Mark makes the comment that terms like "engineering" and "architecture" are completely misused when it comes to software. I never liked the terms myself... Mark suggests that instead we use the analogy of gene splicing.

That's the best idea I've heard in a long time... Maybe I should change my title to Chief Software Splicer? Maybe Chief Software Hybridizer? Chief Software Incubator? I'll noodle on it for a bit...

I have seen far too many projects go astray because people thought there was a "right" way to do software... probably because they bought into the concept of it being similar to designing a building, or slapping gears together. No No No! Software design is vastly different... we're more like mad scientists. Good software designers understand the potential for chaos, and use techniques to control it.

What we do has the vague appearance of science: we use algorithms and patterns that have worked in the past, we get to know the limitations of physical hardware, we analyze common security and performance blunders, etc. However, every time we create new software, we are building a new Frankenstein's monster! It could be utterly mundane, or terrifying and ferocious. Likewise, integrating two existing systems isn't as simple as clicking together some Legos... it's merging two completely different species into one single unholy unit. In other words, it's creating a hybrid chimera like an eagle with a lion's head, or a fire-breathing manatee.

In short, it's difficult to predict the end result of software unless you've done exactly the same thing in the past... but in practice, this is almost never the case. People rarely pay developers to do the ordinary... which is also part of the problem.

Sure, we can make reasonable guesses about what this new beast will be like... based on previous combinations, patterns that work, initial tests, and similar software development best practices. However, it's still difficult to determine how it will behave in the wild until you unleash it. Perhaps software developers should take a page from the mad scientist handbook, and hone the following skills:

  • Keep close tabs on your new creations.
  • Observe closely how they interact with humans.
  • If they cause humans pain, then you must act swiftly:
    • either destroy your beautiful creation and start over, or
    • teach it to behave better.

Either way, new software will behave in erratic ways, and you always need a plan for what to do when it begins to behave badly... And many times you never know how it will behave until you unleash it upon the public... so keep the tranquilizer darts handy.

Data Means The End Of Theory??? Puh-lease...

Once in a blue moon I pick up a Wired magazine... then I usually am reminded why I so rarely read it...

This month, they came out with a terrible article about The End Of Theory, all about how the deluge of digital information will make the scientific method obsolete.

WHAT?!?!

It started out OK, with info about how Google was doing well not by making theories about trends, but instead by collecting massive amounts of data on behavior. True enough, and no complaints there... but Wired then extends this in bizarre directions, saying that this means an end to all scientific analysis: there are no more grand theories, it's all just statistics now.

Further proof in the article? Quantum physics stopped trying to find out "why," and instead just focused on gathering tons of info on the "what." He also uses the "shotgunning" approach to DNA sequencing as the prime example of the end of theory. The whole thing was tons of useless "data" that didn't even come close to supporting his "theory" that data trumps theory.

How ironic... but what else would you expect from somebody with only a passing knowledge of science?

Firstly, every single example in the entire article is a false analogy. Either massive amounts of data were supporting existing scientific theory, or they were feeding theories that only work when fed massive amounts of recent data. Is there a theory for what trends will be popular with 13-year-olds? Sure, there are tons... but they are all based on the ability to quickly acquire recent data. The article claims that knowing the raw numbers is all you need... it's a decent first approximation, but anybody with a passing knowledge of marketing knows that spotting trends is about two things: how many, and who? Google knows how many, but if you can determine whether the "who" includes trendsetters, then the trend can turn into an epidemic.

The hard sciences -- like physics and biology -- also have well-established models that serve us well, and which are pretty accurate even when based on old data. These models are great estimates in the absence of new data. That's the whole frigging point! Sure, you could tell which plane will crash by building 1,000,000 virtual models and test-flying them all... you'd sure get tons of data! But it's a lot more cost-effective to analyze data, make models, and test just one model at a time.

You should never be tempted to put data ahead of theory... do so, and I guarantee you will be destroyed by those who understand both. For example, a 10-year-old article in the Atlantic Monthly warned about how the digital age would create an over-reliance on data instead of theory... one researcher demonstrated something like how, over the past 50 years, the ups and downs of the S&P 500 almost exactly mirrored milk production in Burma.

According to Wired, just watch milk production in Burma, and you'll be a billionaire! Of course, that advice is total crap... because next year cotton output in Egypt might be a better example. Or perhaps the length of Warren Buffett's fingernails is even better. If you just rely on data, your "model" changes too quickly to be useful... unless it's based on a theory that takes up-to-date data as an input, and can give guidance when you only have old or contradictory data.
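You can watch the "Burma milk" effect happen with nothing but a random number generator. Here's a toy sketch (entirely my own, no real market or dairy data involved): generate one fake "market" series, generate a thousand completely unrelated random walks, and see how well the best impostor correlates with the market. With that many candidates, something will look eerily predictive purely by chance.

```python
# Toy demo of spurious correlation: among enough unrelated random-walk
# series, one will correlate strongly with a "market" series by luck alone.
import random

random.seed(42)

def random_walk(n):
    """Generate a simple Gaussian random walk of length n."""
    x, out = 0.0, []
    for _ in range(n):
        x += random.gauss(0, 1)
        out.append(x)
    return out

def corr(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

market = random_walk(50)                              # stand-in for 50 years of the S&P 500
candidates = [random_walk(50) for _ in range(1000)]   # 1000 unrelated "indicators"

best = max(abs(corr(market, c)) for c in candidates)
print(f"best spurious |correlation|: {best:.2f}")     # usually very close to 1
```

The punchline: the winning series has zero predictive power next year, because it won by lottery, not by theory.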

Google makes the process faster, but ultimately changes nothing about the process itself. The discovery of useful knowledge still follows the scientific method:

  1. gather initial data
  2. make an initial hypothesis
  3. test the hypothesis with new data
  4. if the hypothesis is validated, it graduates to become a theory
  5. use the theory in lieu of up-to-date data, but
  6. continuously refine your theories with newer data, data in a different context, and data acquired with more accurate techniques
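Those six steps are really just a feedback loop. Here's a deliberately silly caricature of my own (the "theory" is nothing but a running average): hold a simple model, validate it against each new batch of data, and revise it when the prediction misses badly.

```python
# Toy caricature of the scientific method as a feedback loop:
# a "theory" (here, an estimated average) is kept as long as it
# predicts new data, and refined whenever it misses.
def refine(theory, new_data, tolerance=1.0):
    """Return an updated theory given a new batch of observations."""
    observed = sum(new_data) / len(new_data)
    if abs(observed - theory) > tolerance:
        # hypothesis failed validation: revise it toward the newer data
        theory = (theory + observed) / 2
    return theory

theory = 10.0  # initial hypothesis, based on initial data
for batch in [[10.2, 9.9], [12.5, 12.8], [13.0, 12.9]]:
    theory = refine(theory, batch)

print(round(theory, 2))  # prints 12.14 -- the theory drifted with the data
```

The point isn't the arithmetic... it's that the theory survives between data batches, which is exactly what raw data alone can't do.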

Seems to be what everybody is still doing... and apparently the editors of Wired were asleep during Science 101.

Oracle BEA Webcast Liveblog...

Oracle will be doing a webcast on the BEA acquisition in about 30 minutes... finally telling the market what their plans are for integrating the two middleware stacks.

As I said before, from an ECM perspective, we couldn't have been happier about the BEA acquisition. I always preferred the WebLogic stack, and the Plumtree/Aqualogic folks always had some cool technology (even if it didn't always conform to "standards").

I got a bit of an early preview from other Oracle ACEs, but now we'll be able to talk about it. I'll be updating this page every few minutes with my thoughts.
