I recently came across the article We Don't Know How We Program, a discussion of the gap between how developers and non-developers think about the process of writing code. It begins:
I was talking to a colleague from another part of the company a couple of weeks ago, and I mentioned the famous ten-to-one productivity variation between the best and worst programmers. He was surprised, so I sketched some graphs and added a few anecdotes. He then proposed a simple solution: "Obviously the programmers at the bottom end are using the wrong process, so send them on a course to teach them the right process." My immediate response, I freely admit, was to open and shut my mouth a couple of times while trying to think of a response more diplomatic than "How could anyone be so dumb as to suggest that?"
Hehehe... the central premise of the article is that programming is a creative endeavor, which doesn't lend itself well to process... The unfortunate developers subjected to process will only achieve mediocrity... additionally, any process that stifles creativity will expel or crush exceptional programmers, because they need creative space to be ten times as productive.
Does that mean that good programming cannot have a process? Of course not... although as others have noted, things like CMMi should be avoided like the plague. A process needs to empower creativity, but also rein it in when necessary. Programmers -- like artists -- think big, and do wild things that are cool but don't satisfy the needs of the end users. The product doesn't sell, the users rebel, everything goes to hell... the developers are well aware of the "failure," so to nurse their bruised egos, they blame the users for being dumb, or the specification for being incomplete. Then they curl up into a ball and call themselves misunderstood.
Yep. Just like Van Gogh.
To rein this in, you need a peer- and customer-driven process to help keep the project down-to-earth... however, done in such a way as not to bruise egos or go anywhere near arbitrary rules. The process needs to evolve with the code. You also need something that encourages developers to think of the code as a community project, to reduce the sense of ownership and thus keep egos intact. Agile focuses a lot on those kinds of processes... although Agile needs some tweaking for very large projects.
In addition, you also need processes that get the creative juices flowing... this doesn't mean brainstorming sessions or hyper-expensive collaboration tools. It usually means simple things like physical proximity. Some teams have even had great success with an enforced MESSY DESK policy. That's right... clean desks are evil! Messy desks and physical proximity encourage the "drop in, say hi, notice notes strewn about, and comment on them" process... which more than anything else inspires collaboration and fresh ideas.
My gut feeling? Unless you have artists designing your code process, your organization will never create exceptional code.
So keep a close eye on that process weenie with the stopwatch... he's clearly up to no good.
It's even more official... the Oracle purchase of BEA is final.
Most of my thoughts on the subject are in an older post from when Oracle announced their initial offer for BEA.
Its effect on Oracle ECM technology will be minimal... Oracle ECM already integrates quite well with a large number of BEA products, and this doesn't alter the overall ECM strategy much. The Stellent alumni are pleased as punch... although the price list for Oracle Middleware just got a lot more complex.
Speaking of which, the effects of the BEA purchase on Oracle ECM sales should be very positive... since Oracle sells the best content management app available, and it integrates nicely with lots of BEA goodies, it should be a pretty easy sell to existing BEA customers.
Of course, the devil is in the details... so stay tuned.
UPDATE: Billy Cripe has some info about potential layoffs in Oracle Fusion Middleware. I'd like to link directly to the specific article about layoffs... but when I click on the permalink, it just takes me to Billy's LinkedIn page! Bad Omen?
UPDATE 2: Billy fixed the link...
Finally online... at Blip.tv instead of YouTube:
It's a really great description of what social software is (mostly a pile of failures) and where it's going. Hopefully the 3rd or 4th generation of social software will learn from past mistakes, and help do something with the insane cognitive surplus in the world. I would have just used the phrase "free time" instead of "cognitive surplus," but Shirky is an academic after all... and they love inventing new words.
My favorite quote was when he was chatting with a TV producer about Wikipedia, and how people were obsessing so much about the Pluto page when it was downgraded from a planet... She shook her head and said "where do people find the time?" Naturally, Clay Shirky snapped and said "people who work in TV don't get to ask that question!"
It's been two years since my inaugural blog post on April 29th, 2006: The Trouble With RSS. Over my site's second year, I wanted to do some long-term analysis of how different web analytics tools track hits, visits, and the like. As expected, they don't agree with each other:
- SiteMeter: 89,800 visits (132,000 hits)
- Google Analytics: 84,000 visits (140,000 hits)
- Webalizer: 431,000 visits (3,660,000 hits)
In contrast with the other two, Webalizer uses raw Apache logs to determine hit count, so it tracks every single dang hit... Over 3 million hits in one year??? That's clearly too many... I'm not that interesting... but the visit count might be more accurate. Webalizer is the only one of the three that tracks folks who view my site with RSS readers, which may hit my site several times per day... thus the higher visit count. The hit count is hyper-inflated because it counts search engine spiders, spammers, and hack attempts (some better than others).
All told, if the majority of folks view my site with RSS, then Webalizer's count is more accurate. If most of them view it the old fashioned way, then the other two are more accurate. I'm probably in the 100,000 - 200,000 visits per year range.
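For the curious, the spider filtering that separates Webalizer-style raw counts from "real" visits is easy to sketch. Here's a rough Python pass over Apache combined-format logs; the bot keywords and the "one unique IP per day equals one visit" definition are my own simplifications, not what any of these tools actually do:

```python
import re

# Apache Combined Log Format:
# ip - - [date] "request" status size "referer" "user-agent"
LOG_RE = re.compile(
    r'^(\S+) \S+ \S+ \[([^\]]+)\] "[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

BOT_HINTS = ("bot", "spider", "crawl", "slurp")

def summarize(log_lines):
    """Rough hit/visit tally: every matching line is a hit; a 'visit' is
    one unique (day, ip) pair, with obvious spiders filtered out."""
    hits = 0
    visits = set()
    for line in log_lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, timestamp, agent = m.groups()
        hits += 1  # raw count, spiders and all -- the Webalizer-style number
        if any(h in agent.lower() for h in BOT_HINTS):
            continue  # spiders inflate hits but shouldn't count as visits
        day = timestamp.split(":", 1)[0]  # e.g. "29/Apr/2008"
        visits.add((day, ip))
    return hits, len(visits)
```

Even this crude filter shows why the tools disagree: the same log produces wildly different numbers depending on what you choose to ignore.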
Unfortunately, none of these numbers include the folks who read my site through an online RSS reader, like Google Reader or Bloglines. These sites hit my RSS feed once, then share it with dozens of folks who subscribe to the feed... To get a better estimate, I could pipe my RSS feed through something like Feedburner. Feedburner keeps track of how many subscribers you have on the online feed readers, and produces decent stats on it... however, once you move your feed to Feedburner, it's almost impossible to move it out... so I'm not happy with that option. Even so, that still wouldn't track those who view my content through RSS aggregators like Central Standard Tech, or Orana, or other sites that run Planet.
Well, what about the data from Alexa? That site ranks web pages based on those who surf the web with a toolbar that tracks their every move. Personally, I think people who surf with that toolbar are opening up a major security hole... so their viewing audience is probably restricted to folks who are kind of tech savvy, but don't take security precautions. In other words, newbie geeks. I've never broken into the top 100,000 sites ranked on Alexa, but I frequently break the top 100,000 sites ranked by Technorati... although Technorati only ranks blogs.
UPDATE: As Phil noted in the comments below, most people use Alexa just to boost their own page rank. For example, you could have your web team install and enable the Alexa toolbar, but only when browsing your own web page. That would make your Alexa rank huge without any actual hits from the greater internet...
Even if we could accurately count how many people hit the site, we're still at a loss to know who paid attention. Google Analytics tries to measure "time on page"; other metrics include bounce rate, or even the number of comments.
Oh well... A reliable measure of relevance will always be elusive... but at least we have enough estimates to support a cottage industry of people analyzing those metrics to prove anything they are told to prove ;-).
Back to my anniversary... Lots of stuff has changed since my first anniversary post: I've traveled to South Africa, Brazil, and Argentina... I've remodeled my kitchen, I've nearly completed my second book on Oracle enterprise content management, I've given technology presentations at Oracle Open World, AIIM Minnesota, BarCamp Minnesota, and IOUG Collaborate in Denver. I've trained both salespeople and consultants on what Enterprise Content Management actually is, and I helped negotiate a settlement to an 18-month lawsuit against a local non-profit. Oh yeah... I implemented about a dozen ECM solutions as well...
Next year, I hope to have even more goin' on... and a few more web site visits.
When I first heard about Oracle taking a new direction with their old content management product -- meaning the old Content DB, not the newly acquired Stellent stuff -- the first thing I thought was it's about time!
When Oracle claimed it had two content management systems, that really confused people... especially considering that Content DB was at best a set of tools to create a content management system, whereas Stellent was a full blown application plus framework. They really weren't like each other at all.
Universal Online Archive (UOA) is Content DB, but now focused on being an archiving platform. On Oracle 11g, it is an extension of the Secure Files feature of the database. If you haven't heard of Secure Files yet, it beats the Linux filesystem on both read and write performance. It also has compression, de-duplication (only storing duplicate files once), and encryption. The encryption is an extension of Oracle Transparent Data Encryption, plus support for encrypting entire tablespaces instead of just individual columns. This means support for foreign keys, as well as indexes beyond the basic b-tree stuff...
Compression reduces storage needs by 33% on average, according to Oracle. If you then use IDC's statistic that there are 8 copies of every content item, de-duplication alone would bring the total storage down by 87.5%... all while maintaining better-than-filesystem performance, despite the added cost of encryption. See this whitepaper for some tuning statistics and tips.
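The arithmetic is worth spelling out. Here's a quick Python sketch using the two figures above -- Oracle's 33% average compression and IDC's 8 copies per item; the function name and the example numbers are mine:

```python
def archive_footprint(raw_bytes, copies_per_item=8, compression_savings=0.33):
    """Estimated storage after de-duplication and compression, using the
    figures quoted in the post (IDC's 8 copies per item, Oracle's 33%
    average compression)."""
    deduped = raw_bytes / copies_per_item       # keep one copy of each item
    return deduped * (1 - compression_savings)  # then compress what's left

# Example: 8 TB of raw email shrinks to well under 1 TB
after = archive_footprint(8000)  # in GB
```

De-duplication alone removes 7 of every 8 copies (87.5%); compression then shaves another third off what remains.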
Secure Files is the next generation of Large Objects for the database... and it's very cool... but what should you run on top of it? For the longest time, the folks at Stellent balked at using the database for file storage. Using the filesystem made much more sense because of performance reasons, which made up for the additional complexity of the architecture. However, if the user has 11g, there really is no better option than storing content items in the database.
NOTE: This rule-of-thumb does not apply for web content -- especially for small images and thumbnails. In those cases, a split approach where public web assets are stored locally would probably be faster. Luckily, a customized FileStoreProvider can help you achieve this.
Also, Oracle Universal Online Archive finally fits in with Oracle's broader strategy for content management. Even though it can store anything, the first release will have connectors to email servers, making it a mail archive:
- Microsoft Exchange
- Lotus Notes
- Generic SMTP Server
This fits right in with the Universal Records Management strategy, which is to embed a Records Management Agent in remote repositories, and control their life cycle from the Records Management system.
In other words, your email archiving policy is no longer dictated by IT. Your records managers can say when an item should be archived, and how long it should be retained based on events, instead of simply time and size constraints. For example, emails should be retained 2 years after a project completion, 6 months after employee termination, or 12 months after you lose a specific customer. That will reduce both your email space requirements, and your legal risk.
But it doesn't stop there... the next step is to make connectors to other content management systems, for example, SharePoint. The idea is to archive content out of systems like SharePoint, and replace it with a "stub". When a user downloads from SharePoint, the "stub" is smart enough to redirect the request to the archive, and return the content directly.
In other words, you could be using a secure, compressed, de-duplicated, encrypted archive without ever noticing. Throw in a Records Management Agent, and you'll also invisibly comply with dozens of regulations and laws... no matter where you store your information.
It's a good strategy, and some interesting technology... we'll see how it pans out.
UPDATE: The release was announced, but they don't have a date for when it will be available for download. Here's some more info about the release, and some places to watch for downloads:
Michelle won the cookoff to see who had the coolest ECM implementation... woot! The prize was one "silver" ladle, and a $100 gift certificate. Besides Folios, annotations, and the new Site Studio contributor, she showed off Kyle's PicLens integration with Stellent's RSS Feeds, which went over quite well... nice and flashy! The roadmap and ECM focus groups were good as well... although in the future I'd do the cookoff first, then the roadmap, and lastly the focus group. That way, people have their feature lists and questions fresh in their mind.
As usual, a conference this large left me feeling like I missed out on a lot. I networked with a lot of people, and discussed ECM a lot... but I wanted to learn more about identity management, performance tuning, and Hyperion. There were simply too many options, and the handful of non-ECM talks I attended were a tad too high-level for my taste. Maybe I'm too technical, but I don't feel like I learned that much.
Brian Dirking wanted some feedback, so I guess I'd make the following suggestions:
- After people register (and pay) for Collaborate 09, give them access to the presentations from 08. Then we'd be better able to determine who is a good presenter, which topics are too technical, and which aren't technical enough.
- Have some level of continuity between years... I've given the "50 ways to integrate with the content server" talk about 4 times, but it's always a bit different, and people continue to be surprised at how flexible Stellent is.
- Have some kind of easy trends analysis to help people find "what's hot" in their industry. Ideally this would be community based, to avoid sales pitches and promotions. For example, send out a survey to ask people what their industry is, and what topics they are interested in... perhaps even which technologies or presentations that they might find useful.
I'm used to more focused conferences, like the O'Reilly ones... so this many high-level presentations makes me sad. I personally would like a bit of community feedback to help everybody find which topics are most relevant to their background, goals, and needs.
Not an easy undertaking... but I'd wager a lot of conferences would appreciate something similar.
I gave my presentation on 50 Ways to Integrate with Oracle Content Management today... it was similar to my one from Crescendo last year, but I updated it a bit with some of Oracle's new connectors (BI Publisher, Secure Enterprise Search, Records Management Agents, etc.).
After that, I had a book signing. On my way over, I realized that I didn't tell anybody I was doing a book signing... so attendance was kind of thin. Plus I was late. Chaffee showed up with Patrick and Rhonda, and I signed his book with something characteristically glib...
I had lunch with some customers -- finally attempting that business networking thing -- and promised to help a few folks out with their architecture.
In the afternoon, I helped out on Michelle's two hour hands-on lab about Site Studio: Building an Enterprise Web Site From Scratch. Believe it or not, if you know what you're doing, you can get a pretty good handle on an enterprise scalable web site in a few hours with Site Studio... Then it was dinner with some Stellent folks, and drinks while we watched the Wild lose.
Since I'm now done with my official obligations, I'll be spending day three going to sessions and networking...
I suppose I should start with day zero, and not day one...
Michelle and I landed, but the hotel didn't have our reservations on file. Great... and on the one day we decided to not print out the confirmation letter. Michelle scoured her web-email using the computers behind the reservation desk... in the meantime a few Oracle employees came in and were initially confused as to why she was working behind the counter... Anyway, the clerk looked through their list of who was checking in that day, just to see if our names were spelled incorrectly.
We were there of course: as Brian and Michelle Hugg. Lovely. Yeah. We'll never live that down.
Later I had drinks with some folks I hadn't seen in a while (like Dan Norris and Matt Topper), as well as folks I heard of but never met (like Jake Kuramoto and Paul Pedrazzi). The Oracle ACE Director dinner was good. I love finding out what other ACEs are up to, and what technologies they are interested in. The buzz these days seems to be all about Hyperion... just when I started learning about BI Publisher and Real-Time-Decisions!
Keeping up on enterprise technology is a constant struggle...
The first day of IOUG Collaborate 2008 was pretty good... I hung out at the Enterprise Content Management conference-within-a-conference a lot to chat with other ECM folks. I gave a well-received talk about why ECM projects fail, which was essentially an extension of the AIIM list from last year. It wasn't just a rant; it had some practical advice about what typically goes wrong, and what you can do about it. Cliff Cate and Tom Tonkin presented their war stories and advice as well.
Here's a tip: very few enterprise software failures have much to do with bad software... it's almost always poor communication.
I wasn't able to attend many sessions after that... not the exhibit hall, not even the keynotes. I did check out the hands-on lab about Oracle Text, hoping for a deep dive... but it was pretty basic. Attending a conference is more fun when you're not a presenter. I had to go to my hotel early to put the finishing touches on my Tuesday presentation... so I skipped all the festivities.
I have another session on day 2, after which I'll be able to relax, attend more sessions, and network more.
So, I caught wind of the release of the Google App Engine late last night... which is a web development framework that allows you to run your entire application inside Google's infrastructure!
This is huge... it's like saying, "why run your web site on some random hosting company, when you can run it inside frigging Google?" Google manages your uptime and backups, and allows your site to scale to Google-sized proportions. It's cloud computing to the max. Not only does it virtualize your data storage (like Amazon S3 and SimpleDB), but you can also host the application itself in Google's environment! Your code is virtualized across hundreds of servers. If any one of them crashes, who cares? Your app will keep chugging along.
I got on their waiting list as soon as their site was available at 11pm. A half hour later, I was greeted with a 'welcome' email from Google, but by that time I was a tad too sleepy to check it out... I'm lucky: this was a preview release for only the first 10,000 folks. Register now: there still might be time.
Of course, there are a few gotchas:
- It only supports apps written in Python. No Java, no C, no .NET. Although, Python rocks...
- The best web application framework option is Django... which is an awesomely elegant framework, similar in philosophy to Ruby On Rails. Existing Django apps can be ported in minutes.
- You cannot write to the file system; you have to use the Google Datastore API
- If your web request takes more than a few seconds to respond, Google will kill the process, and send back an error... so I don't know how they do batch processes...
- Google owns your ass even more.
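To give a feel for the programming model, here's the general shape of a request handler as a bare WSGI app. This is plain-Python WSGI for illustration only -- the actual App Engine SDK wraps this in its own framework and handles routing and the Datastore for you:

```python
# Minimal WSGI handler: App Engine serves Python web apps, and WSGI is
# the lowest-common-denominator way to show what a handler looks like.
def application(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    body = f"Hello from {path}".encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]  # WSGI responses are iterables of byte strings
```

The interesting part isn't the handler, of course... it's that Google runs it on as many machines as your traffic needs.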
I'm happy about this... I think its a huge validation of the direction Oracle is going with their Coherence application & storage virtualization engine (which does work with Java ;). It's also some nice competition to the Amazon S3 and SimpleDB services... not to mention a huge validation of the Python language and Django framework.
I also let out a hearty guffaw at those who mocked me for my insistence that Python and Django were the superior framework... Google will certainly be ramping this up soon, and it will certainly be reasonably priced. If you're starting from zero, I can't think of a better way to go than Python and Django. Forget Ruby, forget Rails, forget PHP, forget .NET, forget Java. Enterprise companies who want control over their data, and already have a large middleware investment, should use Oracle Coherence or something similar... and use the web framework just for the front-end.
Unsure what the implications are for SOAs...
The people hardest hit by this will be those dedicated to the LAMP stack at cheap web hosting companies. In other words, those folks who set up a Linux, Apache, MySql, and PHP environment, and try to keep the dang thing running... which is a ton of effort, and difficult to scale. Small companies want uptime and scalability as much as the big boys, and virtualization (aka cloud computing) is the way of the future.
Middleware that cannot be easily virtualized will die on the vine...
On April 1st, Google announced that their Google Docs application now works offline.
This is kind of the direction that people have been taking for a while... being able to use Rich Internet Application technology like Adobe AIR to work on web forms, but take them offline for later viewing. However, Google decided to take an oddly different approach.
It's like AJAX on crack. And if done right, it could break down even more walled gardens than Web 2.0 did.
Currently, Google Gears is only in its 0.2 release: very very very beta. Not like GMail beta, or Google Docs beta... but so beta that maybe they should call it alpha or something. What I found interesting was the possible effect this strategy will have on the rest of Google's applications. Take Spreadsheets offline? How about my Analytics data? Why not GMail? The process would be this:
- Connect to your Google online app.
- Use Gears to synchronize your local database with Google data.
- Take your application offline.
- Run everything you need by connecting to the Gears web server, and getting back chunks of HTML/XML.
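The steps above boil down to a sync-then-read-locally pattern. Here's a toy Python version of the idea -- Gears does this in the browser against a local SQLite database; the class here is purely illustrative:

```python
# Sketch of the offline pattern: synchronize a local cache from the
# server while online, then answer every read from the cache afterwards.
class OfflineCache:
    def __init__(self, fetch_remote):
        self._fetch_remote = fetch_remote  # callable: key -> value (the "server")
        self._local = {}
        self.online = True

    def sync(self, keys):
        """While online, pull the listed records into the local store."""
        for key in keys:
            self._local[key] = self._fetch_remote(key)

    def get(self, key):
        if self.online:
            # refresh opportunistically while we still have a connection
            self._local[key] = self._fetch_remote(key)
        return self._local[key]  # offline reads come straight from the cache
```

The hard part Gears actually has to solve -- merging offline edits back into the server copy -- is exactly what this toy skips.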
Now... What happens when you add Greasemonkey to the mix?
Don't like how GMail organizes its back-end data store? Well, too bad, you can't use Greasemonkey to force GMail to store or retrieve your data differently... that is, unless Gmail uses Gears!
If so, I could inject custom code to not only synchronize with my online database, but store it however I want. Previously, Greasemonkey could only access existing content -- provided it was available through AJAX or Remote Scripting. But when combined with Gears, Greasemonkey scripts can perform radical analysis of web content, and store the processed information locally! It can also synchronize back to the main site, for proper online storage...
In effect, Greasemonkey allows end users to inject customized code for web page display... but Greasemonkey plus Gears allows you to inject a whole custom web application! So what??? Well, imagine being able to do this:
- Use GMail to store up all the email questions and answers on a community group. Use Greasemonkey to keep a running count of who helps answers questions (gurus), versus who just demands answers (leeches)... then avoid helping the leeches.
- Use a Greasemonkey script to run custom reports based on Google Analytics data, and present it right in the browser.
- Create an offline Google Spreadsheet with Gears. Then, go to any one of the popular online polling apps (Surveymonkey) or web form designers (Wufoo). Use a Greasemonkey script to access the raw data from the reports, process it, and inject it into a Google Spreadsheet. Sync the offline spreadsheet with Google, and now the report is online for all to see!
- Transfer information from one site -- say Facebook -- into any other site -- say LinkedIn -- without having to use their proprietary APIs, or let the sites know the password for the other site! Just use a Greasemonkey spider to grab the information, store it locally, and upload it when appropriate.
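The first idea on that list -- tallying gurus versus leeches -- is just bookkeeping once a script has scraped the messages. A hypothetical sketch of the scoring, with the data format invented for illustration:

```python
from collections import Counter

def score_participants(messages):
    """Tally answers vs. questions per sender, for the 'gurus vs. leeches'
    idea above. `messages` is a list of (sender, is_answer) pairs -- a
    stand-in for what a Greasemonkey script would scrape out of GMail."""
    answers, questions = Counter(), Counter()
    for sender, is_answer in messages:
        (answers if is_answer else questions)[sender] += 1
    # positive score = helps more than demands; negative = leech
    return {s: answers[s] - questions[s]
            for s in set(answers) | set(questions)}
```

The Gears half of the trick is that those scraped pairs could be stored locally and accumulated across sessions, instead of recomputed every page load.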
Will it bring about the next gen of the web? Web 3.0? Web 4.5? Maybe web candle plus monkey? We'll see what happens in Gears 0.3...
UPDATE: Jake had the suggestion that it might be more useful to use Mozilla Prism with Greasemonkey, as opposed to Google Gears. Lifehacker recently profiled Prism. That depends on how this plays out... Prism would work great for Firefox-based rich internet apps... whereas Adobe AIR and Google Gears would be more cross-platform. If you want iPhone support, you'll need Safari. Although at present Prism is more feature complete than Gears.
Overall, I think Google Gears is going in a better direction than AIR or Prism, because they are following the maxim don't break the web!... but time will tell if they can actually deliver.
An interesting new book by Bill Price -- the former VP of Customer Service at Amazon.com -- as interviewed by Guy Kawasaki:
Customers don't want to call their bank or email their online retailer if something's confusing or if there's an error--instead, everything should work perfectly in the first place. A recent survey cited 75% of CEOs proclaiming that their companies provide above average customer service, yet almost 60% of customers said that they were "somewhat to extremely dissatisfied" with their most recent customer service experience.
Almost a tautology... if everything worked perfectly we wouldn't need customer service... therefore the best option is to never have it... erm... hookay.
Seriously though, he has a point. If the goal is amazing customer satisfaction, then all departments need to work together to achieve it. From the developer's perspective, we knew very few people read the documentation or ran proof-of-concepts, so support calls were inevitable. Unfortunately, we accepted that, and became cynical...
Customer: My software doesn't work right after I patched it!
Developer: Did you read the 'readme.txt' for the patch? It's a whole whopping 3 pages long.
Customer: No...
Developer: Call support.
In retrospect, I now realize that all it would take is a tiny adjustment to massively improve the customer experience: make documentation that is enjoyable to read, or make it brain-dead easy to whip out a test box or a proof-of-concept. Naturally, doing either of those has its own internal political implications... so it needs to be a goal that everybody agrees to: development, documentation, support, consulting, marketing, and sales.
When you think you might be off track, just ask yourself this question: How does this help our customers kick ass? That should set you right again... (Hat tip: Kathy Sierra)
Most companies actually haven't done the math to deliver Best Service because Best Service is always cheaper--or they do the wrong math. It's not just "cost of making bad or confusing product compared to a good product versus associated cost of service." ... Mobile phone companies don't even want you to know what you are really paying and invented new math: "$200 free calls on your $50 a month plan", but it's much more complex even than that when you read the small print. On the other hand, MCI in the old days, and Telstra today, analyze call patterns and then call their customers to recommend a LOWER-rate plan. That's what we like: being proactive, a core part of Best Service.
*pffft!* *cough!* Excuse me while I wipe the tea off my monitor...
Holy crap, a cell phone company that helps their customers spend less on their calling plan? At first, this sounds crazy... as though any company that followed it would lose margins and go out of business. But would they? These days cell phone companies are trying desperately to retain customers. A tiny bit of goodwill like this can go a long way towards brand loyalty. Save them $5 per month, and they'll probably stick around for another year.
Similarly, when Amazon is unable to deliver a product by the date it originally promised, it sends out an "I'm Sorry" email, allowing the customer to cancel their order. They suggest that if the person absolutely needs it right away, they should cancel the order, and buy from someone else. Very few people cancel... but they all become more loyal customers.
Naturally, this book is better for business-to-customer interactions, and probably less for business-to-business... but a compelling read.
This could be pretty serious... According to the Associated Press and Wired Magazine:
IBM Corp. has been temporarily banned from new federal contracts as prosecutors examine interactions between employees of the company and the Environmental Protection Agency. The suspension went into effect last Thursday "while the agency reviews concerns raised about potential activities involving an EPA procurement,"
When one federal agency imposes a ban, all others usually follow suit... the EPA claims it will finish its investigation within 30 days, but this investigation might take a full year... which means IBM could lose out on $1.3 billion worth of government contracts this year. The reasons for the ban are unknown... the EPA hasn't made its case public. However, if you read between the lines, it looks like the Feds think IBM used bribery to win contracts.
Either they noticed some oddities about money changing hands... or the Feds believed that IBM sucks so dang much that only bribery could explain why they get any business at all... ;-)
If you're an IBM competitor for government contracts, let the FUD fly!
UPDATE: Some more links from Slashdot:
- Official IBM page at the Federal Excluded Parties List System
- Federal News Radio interview with EPA
- biz.yahoo.com's take on IBM
UPDATE 2: Some people have asked if this is an April Fool's Day prank. I think the odds are low on that one... first of all, this was reported on yesterday by many business journals. IBM's stock dropped to 113 in after-hours trading yesterday, and today is back up to 116. If this is a hoax, the SEC will come down hard on the person who started the rumor... it smacks a bit of stock market manipulation: more so than the 2003 April Fool's rumor of the assassination of Bill Gates.
UPDATE 3: Tony Byrne at CMS Watch heard that the reason for the ban is that overly aggressive IBM salespeople threatened the EPA with legal action after losing a contract bid.
I don't know what IBM did, but it seems like EPA thought Big Blue really crossed a line in their appeal of a failed contract bid. Federal contracting -- like so many things in Washington -- is a bare-knuckles sport. Threats of appeals and possible litigation by losing bidders can keep federal contracts officers awake at night. In this case, it appears EPA struck back... I also find big vendors more likely to threaten "up the chain" -- all the way to C levels if necessary -- to appeal a lost bid or to suggest that a particular problem wasn't theirs, but rather stemmed from the customer's low-level employees failing to follow the vendor's prescribed best practices.
Yikes... whining, crying, and threatening customers just because your product can't sell on its own merits? Seems like a pretty slimeball move on the part of IBM... allegedly. If this is true, and IBM lawyers force IBM salesfolks to use fewer cry-baby tactics, I wonder how their revenue would suffer?
Adobe has recently announced that they will release a free version of Adobe Photoshop as a web service.
It's written with the ultra-sexy Adobe AIR technology -- a Flash-based rich internet application -- and comes with 2GB of storage on their back-end servers. It's a slimmed-down version (naturally), but has more than enough features for the average user.
I'm not sure if it will help them up-sell the rest of the Photoshop stack... but their brand name will help them take on some of the other online photo editing sites (Flickr, Photobucket, etc.). Integrations with other social networking sites are a big plus as well.
A Twitter marriage proposal. Twitter is the king of the microblogging sites -- similar to text messaging -- that are an interesting cross between narcissism and chaos.
"We've been known to twitter to each other when we're in the same room," said Sullivan in an e-mail interview. "Sometimes one of our friends will tweet something like, 'Can you guys just talk out loud? Aren't you in the same house?' But since so much of our time is actually spent apart, we've had to find lots of alternative ways to communicate. Twitter has been a wonderful part of that matrix."
Rewis proposed to Sullivan via Twitter shortly before midnight March 2: "@stefsull - ok. for the rest of the twitter-universe (and this is a first, folks) - WILL YOU MARRY ME?" Sullivan's reply: "@garazi - OMG - Ummmmm... I guess in front of the whole twitter-verse I'll say -- I'd be happy to spend the rest of my geek life with you."
How woefully unromantic... I can see the future unfold as a script in my mind:
Daughter: Mommy, how did daddy propose to you?
Mother: Over the internet...
Daughter: Wow... a whole custom website? How romantic! What did it look like? Can I see it?
Mother: No... not a custom site...
Daughter: A blog post then?
Mother: Think smaller...
Daughter: Email?
Mother: Keep going...
Daughter: Chatroom?
Mother: Even smaller...
Daughter: I give up.
Mother: A Twitter Tweet.
Daughter: ... (long pause) ... why didn't he just phone it in?
I didn't say it... Jeff Atwood from Coding Horror said it:
I've seen too many people get so wrapped up in what other people think of them that they can't bear to have an original opinion about anything. But if you accept the premise that this kind of statement won't change anyone's mind, and is ultimately ineffective-- even counterproductive-- what are we left with? What purpose does the statement "stigma of being a Windows developer" serve? I can only think of one: David gets off on putting other people down. And that makes him kind of a douchebag. Which also means when you're using Rails and OS X, you're using the platform of choice for douchebags.
A tad harsh... but similar to Zed's comment that Rails is a Ghetto, people are slowly demonstrating that the man behind Rails might be a bit of a demagogue... an insecure one at that, who apparently gets a kick out of putting people down.
Whatever. Rails will always be inferior to Django... even if Rails gets the buzz.
I personally do not have a platform of choice... I have Ubuntu, two Macs, and a Dell running XP. I use 4 text editors, sometimes all in the same day. This week alone I wrote code in 5 languages... and I doubt I would ever be happy if I was forced to select one and ignore the rest.
I know this goes against everything that the "pragmatic programmers" endorse, but I think they miss the point: using Rails is easy... but, if doing the right thing was easy, then everybody would already be doing it. I agree more with Joel Spolsky, when he says that we programmers don't get paid to solve easy problems... we get paid to solve difficult problems.
To me, that means the more languages, the more systems, the more frameworks, the more methodologies you know, the better able you are to adapt. There is no "right way" to do software. There are only the ways that, in the past, have helped us avoid failure.
Accept it, and move on...
Lots of people these days are nervous about embarrassing online profiles on Facebook... I've blogged about this before (Take My Privacy, Please!), and mentioned that any company would be idiotic to implement such a "don't hire" policy.
Why? Any company that refuses to hire kids for acting like kids is a horrible place to work... and not only that, but it would be totally shooting itself in the foot. Companies need these kids who are talented with social software, searching, and sharing... otherwise, they will suffer greatly in the upcoming global talent shortage.
60% of new jobs in the 21st century require skills that are currently possessed by 20% of the workforce
That quote is thanks to a YouTube video that Billy Cripe found about hiring trends.
Ever notice how difficult it is to find cheap, competent programmers in India these days? The good ones know their value, and are charging 60% - 80% of what their counterparts in California make. For those prices, you're better off outsourcing to Indiana or Idaho. If you must use global workers, check out Brazil, or Bulgaria... but their cheap rates will only last for a short period of time.
When even China is having labor shortages, something very disruptive is happening... If you want to stay ahead of the curve, embrace new ways to empower your talent, and for the love of god have a strategy to retain talent before it's too late.
The perfect process is a myth... things will inevitably go wrong. Thus, measuring the success of an automated business process is much more than just return on investment... more importantly, success should be measured on how the system can adapt when things fall apart.
Infovark had an interesting story about his experience with an insurance company... they started billing him too much, causing overdraft fees at his bank... and it took them 5 months to get the problem sorted out. Why? Because their business process treated every action as an atomic unit, and nobody looked at the whole...
If the guy in billing could just talk to the guy in customer service, things would be great... but there were 4 separate employees in this process, along with delays, feedback loops, and angry letters... When you have a process with lots of moving parts, bad feedback loops are inevitable. Small delays turn into huge delays, and if nobody has an incentive for full process completion, a fiasco is inevitable.
There's a lot of passion in my industry about breaking down business processes into re-usable "services"... this can mean everything from SOA, to just training an employee to correctly respond to email requests. However, too many people treat services and people like replaceable cogs. This is bad, bad, bad... because it gives everyone tunnel vision.
Treating people like cogs gives them a strong incentive to not care about your business. What if something goes wrong? Well, the cog did its job correctly: it produced the proper outputs given the inputs it had... therefore, why should it care? Let's say the process fails 10% of the time... a cog is better off working harder on the 90% to get its numbers up, and ignoring the rest... Why should cogs care? Suggesting changes just ruffles feathers, and it's not like they have the power to change anything outside their domain...
Atomic services need competent owners... that is obvious and commonly implemented. The mystery is why whole business processes -- which are ultimately service orchestrations -- so rarely have owners. It's got to be somebody's responsibility to step in and fix things when anything goes wrong. Hopefully, that somebody has a "bag of tricks" that helps them motivate people, and modify software. It should be somebody who can communicate, is likable, and relatively technical... and naturally somebody with the authority and ability to get things done. And the best people at doing this? The cogs. They know where the pain points are.
Without a whole-process owner, there's no incentive in the system to produce actual results.
I thought such a statement would be flipping obvious... but there are plenty of big giant companies out there who need a reminder.
Well, that was unexpected...
A few minutes ago, Rupert Murdoch made a bid for Yahoo, to counter the offer made by Microsoft... I covered this last week, as did everybody else... This will, no doubt, make Steve Ballmer throw a chair or two.
Fox News or Microsoft? Huh... I'm torn over which one would be worse...
Jake was chatting about the Microsoft/Yahoo merger, describing how little sense it made. I agreed mostly, until I remembered something Steve Ballmer said about Google in 2006. This was when Google was poaching Microsoft talent:
ummmmm.... what!? I was totally confused... because Microsoft most certainly did NOT own enterprise search. Neither did Google. Nobody did. FAST and Autonomy/Verity had some claim... Oracle had a solution or two... but in reality nobody in the world had a product that satisfactorily solved this problem.
How do I know? Well, I was a content management developer, helping create the most flexible product in that market. If anybody -- and I mean anybody -- had a halfway decent search engine for the whole enterprise, I would know. We evaluated dozens, and all fell short for many, many reasons: usually flexibility, security, insufficient context sensitivity, encoding bugs, or performance... or perhaps I'm just being hyper-critical. Nevertheless, we integrated with several of them, and made work-arounds for their limitations.
Anyway... considering Microsoft's recent acquisition of FAST, and the purchase of Yahoo, this means one thing to me: Microsoft is finally getting serious about owning enterprise search. With Yahoo, Microsoft also gets Omnifind: a "free" product made by IBM that's specifically designed for enterprise search. Unless this is a desperate move, that's the only sane excuse for spending so many billions of dollars for Yahoo. They're trying to do for enterprise content what Google did for web content... That could easily be a $10 billion market.
Of course, I'm not particularly convinced that they will actually succeed. And even if they did, they would be "forced" to have an open API that allows easy integration with non-Microsoft products... otherwise, they wouldn't really have an enterprise search product, would they?
Either way, by 2010 they might have something interesting to show off...
Are you a web geek in Minneapolis? Looking to help out a local charity, and prove that Rails kicks Django's ass, or vice versa? Then you should check out The F1 Overnight Website Challenge.
It's a 24-hour, all-night race to the finish... you get one hour to meet with a deserving local charity, then 23 hours straight to get their web site up and running. There will even be a break in the middle for a Nintendo Wii challenge... and the team that wins gets an extra hour to finish their site!
So, call up a few buds, and register your team! Trust me: you'll want to have a team that has worked together in the past on stuff like this... The event will be March 1st, 2008.
Plenty of time to brush up on your skills, and exercise your Wii hand ;-)