Tips on how to better communicate. Some of these topics are general in nature, but most are geared towards helping technology people better communicate with each other. I'm a big believer that most software failures are communication failures; and it's everybody's responsibility to know how to communicate.
PowerPoint is a necessary evil... everybody is expected to give presentations in it, but few people are good at it. They cram too much information into one slide, and pack slides full of data that might better go in a report. Presentations work best when used to persuade; PowerPoint is an awkward tool when you try to educate. There's a reason PowerPoint was banned by the Pentagon:
"PowerPoint is dangerous because it can create the illusion of understanding and the illusion of control" -- Brig. Gen. H. R. McMaster
But alas... we're still stuck with PowerPoint... so we should probably make the best of it!
One of the ways to make PowerPoint presentations more compelling is to tell a story... unfortunately, most people are pretty bad at telling stories as well. There's an entire industry built around corporate storytelling that trains people to engage their audience with a full-fledged story... but there's an even simpler approach. The creators of South Park stumbled on a formula that they still use to assemble stories:
These same rules can apply to making a PowerPoint presentation flow like a story.
You initially assemble your main points -- which is usually the hard part. Then, when assembling your points to tell a story, try to transition between your points with the word "therefore," or the word "but." Like so:
- Slide 1
- ...therefore...
- Slide 2
- ...but...
- Slide 3
- ...therefore...
- Slide 4
Simple, no? You'll be surprised how much better your presentations will "flow" from one point to the next with this method.
Naturally, not all presentations can fit into this pattern... for example, "Top 10" presentations flow numerically from one point to another... so if people doze off, they can pick up again at the start of the next chunk. Also, there may be times when the dreaded "and then" transition is needed, such as when a point needs to be communicated over several slides.
Nevertheless, if you try hard to use better transitions, your story will be more compelling, and PowerPoint will be one notch less evil.
I've used dozens of collaboration systems... none of which really stood out to me. It wasn't that they were difficult to use, it's that none of them actually solved the human problems that limit our ability -- and our desire -- to actually collaborate.
It wasn't until recently that I came across a talk from Clay Shirky, which explained pretty well what was missing... Clay spoke about human nature and software, and asked a very important question: why do some kinds of sharing work well, while others fail?
Well... one reason is that according to anthropologists, there really is no one thing called "sharing." We humans -- like all primates -- have three distinct ways that we share... and our brains are wired to do different things based on what kinds of sharing we are doing.
For example... I want you to imagine that a little old lady is walking up to you on the street. She makes direct eye contact, and gestures that she has a question for you. I want you to take a deep breath and genuinely imagine that she asks you one of the following three things... and take note of your emotions:
- she asks you for money,
- she asks you to help her cross the street,
- she asks you for directions to the bus stop
If you are like most primates, your initial gut reaction to #1 is something like "NO! MINE!" Your gut reaction to #2 is "eh... OK..." And your gut reaction to #3 is "Absolutely! I'd be happy to!"
Why??? All three are sharing, aren't they? Not quite... millions of years of evolution have wired us to react differently to different kinds of sharing. The examples above each demonstrate one kind of sharing:
- Sharing Goods: the gut reaction is to feel bad when you give somebody else your goods... because then you can't use them anymore, and you might not be able to replenish them. Even generous people have this initial reaction.
- Sharing Services: people are more generous with favors, because they don't lose anything physical... merely their time. However, before sharing your time, everybody does a little mental math. Do I have the time? Is this worth my time, or should I delegate to somebody else? Shouldn't I be compensated for my time?
- Sharing Information: people are most generous when it comes to sharing information... it takes little measurable time, it costs nothing, and sharing information makes us feel good. We feel good, because we feel like we've helped out one like us, and made the world a better, more knowledgeable place.
Clay used the example of Napster to illustrate his point... it took a goods sharing problem (can I have your CD?) and a service sharing problem (can you make me a mix tape?) and turned it into an information sharing problem (can I download all your already ripped albums?). People were sharing their albums online because it made them feel good.
Like monkeys with iPods...
The problem with most collaboration software is that it relies too much on "service sharing" to get people to take action. I post some information in a place for "sharing," hoping to make it better through input from others... but for that to happen, first you need to read it and understand it. That's sometimes not a big deal, but in many cases it's a significant time investment.
To make matters worse, some of these systems even make it difficult for you to do the mental math to determine whether reading my document is worth your time... Is this for an important project? How important? Do you need my expertise for all of this, or just a few pages? Should I be charging your department for my time? Not only is this still a "service sharing" problem, but a pretty tough one at that...
Ideally, a good collaboration system would obey the 2 minute rule. Getting information is still something of a service... but if it's a service that can be performed in under 2 minutes, it will probably "feel" more like an information problem... which makes it more likely to be done. If it takes more than 2 minutes, then it feels like a service problem, and then we're back to the mental math problem...
Getting down to the 2 minute rule is tricky... you could opt for a system like Aardvark, which tries to match simple questions with the right person to answer them... Alternatively, you could force people to jump through a few hoops before asking a question, essentially making it easy for people to answer it. If people can estimate the difficulty of the task and the value provided by the solution, then it's easier for them to do the mental math for the tougher problems.
Neither of these are new concepts... in fact, bug tracking systems for successful open source projects use a blend of both. They'd have to, or their entire model would collapse! Still, I have yet to see any enterprise-level collaboration system truly adopt these concepts... probably because the enterprise is something of a captive audience. If you're lucky, you'll have a system that focuses on ease-of-use and good training... but adapting to human behavior isn't always high on the list. Would people still use your collaboration system if you didn't pay them? Probably not... which usually means a problem...
Hopefully the big push to "Enterprise 2.0" solutions will get more software companies thinking about making software that's a natural extension of human behavior... Maybe in a few years we'll have Aardvark for the enterprise... but I'll take my standard curmudgeony "wait and see" attitude ;-)
Jake says yes, I say no... Primarily because we disagree about what a product manager actually does...
First, I think I should answer the question, what the heck does a Product Manager do all day long??? Most of the time when my wife tells people she's a "proDUCT manager" they think she's a "proJECT manager"... which isn't even close. I've met quite a number of Product Managers, in different industries, and I can safely say that most of them do significantly different tasks... And many of them disagree on what their main focus should be.
Why such contention? The best explanation I ever heard was this: a product manager is more or less the CEO of a product line. Which means that pretty much anything that will help your product, you should do... which means a million different things in a million different situations. To be effective, you need to know a little bit of everything -- what features to add, what new markets to attack, what sales people need to sell.
Rich Manalang did not like that definition, saying it was too broad... he preferred the idea that a Product Manager is responsible for just the "life cycle" of the product. In other words, vision, design, creation, testing, support, and end-of-life decisions. In my opinion, this definition is far too narrow... it's far too "developer centric," in that you focus mainly on new features, training, and support... but neglect very important questions, like: what effect does the existence of this new product have on the rest of the company? An individual Product Manager might not know this... but hopefully somebody on the Product Management team does! If not, then nobody is building the path between a successful product launch and a successful company.
If the Product Manager isn't responsible for that critical task, then who the heck is???
Back to the original question... do technology product managers need to know how to code? I say, emphatically no... It's a useful skill to have, don't get me wrong... but I disagree that it's a requirement for everybody on your product management team.
Let's be clear -- programming skills are primarily useful to a Product Manager as a communication technique. A prototype speaks volumes about what features people want... but that's about the limit of its usefulness to a Product Manager. And, of course, if you are a good communicator, you can certainly do without it.
"the three great virtues of a programmer: laziness, impatience, and hubris" -- Larry Wall, inventor of Perl
Now... what if you are a Product Manager in charge of lazy, lazy developers? This happens. Maybe you want a feature, but the developers don't want to add it. So the developers give you the run-around so they can go back to playing Halo. Well, in those cases it helps to know how to program, so you can call out your developers for being ~~lazy~~ virtuous... but this only works if you know a great deal of the existing code base as well! Just because it works in a prototype, that doesn't mean it will work when integrated into the product.
"When it comes to understanding code, if you wrote it 6 months ago, it might as well have been written by somebody else" -- Ancient Geek Proverb
Knowing the code base is a pretty hefty requirement... even seasoned developers don't know everything about their product... so it would be nigh impossible for a Product Manager to do so. It's more important that their minions think they know the whole code base, to keep the ~~lazy~~ virtuous developers honest. The best technical Product Managers know how to "dive deep" into the product, and know well a handful of obscure but important details about the system... this inspires a healthy amount of fear.
Ultimately, Product Management is so important and so difficult that it's almost impossible to find all of the skills you need in one person. Small companies make do with one, but as companies grow, they usually break it down into three teams... it's occasionally useful for the "technical" Product Manager to know how to code, but this rule does not apply to your whole team.
If you're looking for more info on this subject, I've heard great things about the Pragmatic Marketing Framework for designing a Product Management team.
Or is it?
Well, isn't this convenient... according to the Global Language Monitor, the phrase "Web 2.0" has become the one-millionth word in the English language... narrowly beating out "Noob," "Slumdog," and "Cloud Computing."
Firstly... yes, English does have more words than any other language. The British Empire kind of spread English everywhere... and unlike French and Spanish, English acts like a sponge, absorbing every word it can find! Taboo, Tattoo, Tortilla, you get the picture.
But... I call shenanigans. I think this thing was rigged to get maximum press coverage. "Web 2.0" is not a word, it's a phrase. Also, it has been around for about 7 years now, and was hugely popular in the technology field for the past 5. It is a much more common phrase than "Cloud Computing." The word count folks claim that a term needs to be mentioned 25,000 times before it's an "official" word... but the New York Times alone mentioned it on 2,700 occasions! I'm sure a survey of other sites would demonstrate that this word hit the 25,000 sweet spot many years ago...
Others are likewise skeptical:
Part of what makes determining the number of words in a language so difficult is that there are so many root words and their variants, said Sarah Thomason, president of the Linguistic Society of America and a linguistics professor at the University of Michigan... Thomason called the million-word count a "sexy idea" that is "all hype and no substance."
I'll agree there...
Experts can be dangerous... not because they don't know what they are doing, but because you don't know when they don't know what they are doing. And if you are unable to notice this, then you will likely lose a lot of money...
Case in point, there was a recent neurobiology study on how the act of listening to "experts" actually makes your brain shut down!
In the study, Berns' team hooked 24 college students to brain scanners as they contemplated swapping a guaranteed payment for a chance at a higher lottery payout. Sometimes the students made the decision on their own. At other times they received written advice from Charles Noussair, an Emory University economist who advises the U.S. Federal Reserve... The advice was extremely conservative, often urging students to accept tiny guaranteed payouts rather than playing a lottery with great odds and a high payout. But students tended to follow his advice regardless of the situation, especially when it was bad. When thinking for themselves, students showed activity in their anterior cingulate cortex and dorsolateral prefrontal cortex — brain regions associated with making decisions and calculating probabilities. When given advice from Noussair, activity in those regions flat lined.
Whoa... simply listening to "experts" makes your brain less able to calculate risks and make decisions... what's worse, the more counter-intuitive the advice, the less the brain functioned! This should be a wake-up call to anybody who relies on experts frequently...
To be clear, I use experts all the time... but I feel uneasy when I rely on them. Yes, I understand electronics, auto repair, and accounting, but I still prefer to use outside experts because it saves me time. What I never want is to engage an outside expert on something I don't understand -- especially personal finance. I prefer taking a crash course on the subject first, so I can easily spot those so-called "experts" who actually don't know what they are doing. Only after I gain that skill do I feel comfortable listening to experts.
Well... isn't it a bit odd for me -- a software consultant -- to bash outsourcing? Not really... because I try hard to never approach projects with the attitude of an "expert." I prefer to approach them as an "educator." I try to help people understand the whole problem, the possible solutions, and the potential risks. There is no "right way" to do software; there are only ways that in the past have helped us avoid failure... So my greatest skill is helping my clients avoid failure, but only with their knowledge and support can I make them truly successful.
In contrast, an "expert" can only tell you what you want, and then give it to you... whether or not that is actually what you need.
I'm a power hater. I don't hate often, but when I do, I do it with gusto. So I have to say, this pile of vaporware called "The Semantic Web" is really starting to tick me off...
I'm not sure why, but recently it seems to be rearing its ugly head again in the information management industry, and wooing new potential victims (like Yahoo). I think it's trying to ride the coattails of Web 2.0 -- particularly folksonomies and microformats. Nevertheless, I feel the need to expose it as the massive waste of time, energy, and brainpower that it is. People should stay focused on the very solvable problem of context, and thoroughly avoid the pipe dreams about semantics. Keep it simple, and you'll be much happier.
First, let's review what the "Semantic Web" is supposed to be... The Semantic Web is a system that understands the meaning of web pages, and not merely the words on the page. It's about embedding information in your pages so computers can understand what things are, and how they are related. Such a beast would have tremendous value:
"I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize." -- Tim Berners-Lee, Director of the W3C, 1999
Gee. A future where human thought is irrelevant. How fun.
First, notice that this quote was from 1999. It's been ten years since Timmy complained that the semantic web was taking too long to materialize. So what has the W3C got to show for its decade of effort? A bunch of bloated XML formats that nobody uses... because we apparently needed more of those. By way of comparison, Timmy released the first web server on August 6, 1991... within 3 years there were 4 public search engines, a solid web browser, and a million web pages. If there was actually any value in the "Semantic Web," why hasn't it emerged some time in the past 18 years?
I believe the problem is that Timmy is blinded by a vision and he can't let go... I hate to put it this way, but when compared against all other software pioneers, Timmy's kind of a one trick pony. He invented the HTTP protocol and the web server, and he continues to milk that for new awards every year... while never acknowledging the fact that the web's true turning point was when Marc Andreessen invented the Mosaic Web Browser. I'm positive Timmy's a lot smarter than I, but he seems stuck in a loop that his ego won't let him get out of.
The past 10,000 years of civilization have taught us the same things over and over: machines cannot replace people; they can only make people more productive by automating the mundane. Once machines become capable of solving the "hard problems," some wacky human goes off and finds even harder problems that machines can't solve alone... which then creates demand for humans to solve that next problem alone, or build a new kind of machine to do so.
Seriously... this is all just basic economics...
Computers can only do what they are told; they never "understand" anything. There will always be a noticeable gap between how a computer works, and how a human thinks. All software programs are based on symbol manipulation, which is a far cry from processing a semantically rich paragraph about the meaning of data. Well... isn't it possible to create a software program that uses symbol manipulation to "understand" semantics? Mathematicians, psychologists, and philosophers say "hell no..."
The Chinese Room thought experiment argues pretty convincingly that a symbol manipulation machine can never achieve true "human" understanding. This is not to imply human brains are the only way to go... merely that if your goal is to mimic a human, you're out of luck. Even worse, Gödel's Incompleteness Theorems show that all sufficiently powerful systems of formal logic (mathematics, software, algorithms, etc.) are fundamentally limited: there will always be true statements they cannot prove, and if a system is inconsistent, it will happily "prove" false ones. Clearly, there are fundamental limits to what computers can do, one of which is to understand "meaning".
Therefore, even in theory, a true "semantic web" is impossible...
Well... who the hell cares about philosophical purity, anyway? There are many artificial intelligence experts working on the semantic web, and they rightly observe that the system doesn't have to be equivalent to human intelligence... As long as the system behaves like it has human intelligence, that's good enough. This is pretty much the Turing Test for artificial intelligence. If a human judge interacts with a machine, and the judge believes he is interacting with a real live human, then the machine has passed the test. This is what some call "weak" artificial intelligence.
Essentially, if it walks like a duck, and talks like a duck, then it's a duck...
Fair enough... So, since we can't give birth to true AI, we'll get a jumble of smaller systems that together might behave like a real, live human. Or at least a duck. This means a lot of hardware, a lot of software, a lot of data entry, and a lot of maintenance. Ideally these systems would be little "agents" that search for knowledge on the web, and "learn" on their own... but there will always be a need for human intervention and sanity checks to make sure the "smart agents" are functioning properly.
That raises the question, how much human effort is involved in maintaining a system that behaves like a "weak" semantic web? Is the extra effort worth it when compared to a blend of simpler tools and manual processes?
Unfortunately, we don't have the data to answer this question. Nobody can say, because nobody has gotten even close to building a "weak" semantic web with much breadth... In 2006, Timmy himself admitted that "This simple idea, however, remains largely unrealized." Some people have seen success with highly specialized information management problems that had strict vocabularies. However, I'd wager that they would have equivalent success with simpler tools like a controlled thesaurus, embedded metadata, a search engine, or pretty much any relational database in existence. That ain't rocket science, and each alternative is older than the web itself...
Now... to get the "weak semantic web" we'll need to scale up from one highly specialized problem to the entire internet... which yields a bewildering series of problems:
- Who gets to tag their web pages with metadata about what the page is "about"?
- What about SPAM? There's a damn good reason why search engines in the 90s began to ignore the "keywords" meta tag.
- Who will maintain the billions of data structures necessary to explain everything on the web?
- What about novices? Bad metadata and bad structures dilute the entire system, so each one of those billion formats will require years of negotiation between experts.
- Who gets to "kick out" bad metadata pages, to prevent pollution of the semantic web?
- What about vandals? I could get you de-ranked and de-listed if you fail to observe all ten billion rules.
- Who gets to absorb web pages to extract the knowledge?
- What about copyrights? Your "smart agent" could be a "derivative work," so some of the best content may remain hidden.
- Who gets to track behavior to validate the semantic model?
- What about privacy? If my clicks help you sell to others, I should be compensated.
- Will we require people to share analytical data so the semantic web can grow?
- What about incentives? Nobody using the web for commerce will share, unless there's a clear profit path.
I'm sorry... but you're fighting basic human nature if you expect all this to happen... my feeling is that for most "real world" problems, a "semantic web" is far from the most practical solution.
So, where does this leave us? We're not hopeless, we're just misguided. We need to come down a little, and be reasonable about what is and is not feasible. I'd prefer if people worked towards the much more reachable goal of context sensitivity. Just make systems that gather a little bit more information about a user's behavior, who they are, what they view, and how they organize it. This is just a blend of identity management, metadata management, context management, and web trend analysis. That ain't rocket science... And don't think for one second that you can replace humans with technology: instead, focus on making tools that allow humans to do their jobs better.
Of course, if the Semantic Web goes away, then I'll need to find something else to power hate. I'm open to suggestions...
In the early days of computer science, people discovered what was later to be called "Conway's Law":
Any organization that designs a system (defined more broadly here than just information systems) will inevitably produce a design whose structure is a copy of the organization's communication structure.
In other words, let's say you are designing a complex system -- an auto manufacturing plant, a new financial market, a hospital, the World Health Organization, or a large software solution -- the efficiency of the end result will always be limited by the efficiency of how the committee communicates. Let's say two segments of your system need to communicate with each other... but the two designers of those segments were unable to communicate effectively with each other. The end result will invariably be a system where those two segments are unable to exchange important information properly. If I have to run an idea by my boss before handing it off to my peer in another department, then I'll almost always design a system that uses the same paths for sending important messages... whether or not it's the optimal approach.
This helps explain why large companies love Enterprise Service Buses, but small companies think they are the spawn of the devil... neither is correct; both opinions simply derive from the communication structure in their respective organizations.
This goes beyond the obvious communication problems between silos and corporate fiefdoms... even the physical components you design will inevitably mirror your ability (or inability) to communicate. From Wikipedia:
Consider a large system S that the government wants to build. The government hires company X to build system S. Say company X has three engineering groups, E1, E2, and E3 that participate in the project. Conway's law suggests that it is likely that the resultant system will consist of 3 major subsystems (S1, S2, S3), each built by one of the engineering groups. More importantly, the resultant interfaces between the subsystems (S1-S2, S1-S3, etc) will reflect the quality and nature of the real-world interpersonal communications between the respective engineering groups (E1-E2, E1-E3, etc).
Another example: Consider a two-person team of software engineers, A and B. Say A designs and codes a software class X. Later, the team discovers that class X needs some new features. If A adds the features, A is likely to simply expand X to include the new features. If B adds the new features, B may be afraid of breaking X, and so instead will create a new derived class X2 that inherits X's features, and puts the new features in X2. So, in this example, the final design is a reflection of who implemented the functionality.
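The two-developer example above can be sketched in a few lines of code. This is a hypothetical illustration (the class names X and X2 come from the Wikipedia example; the method names are my own invention), showing how the same feature request produces two different designs depending on who implements it:

```python
# Conway's Law in miniature: the same new feature, implemented two ways.

# Developer A wrote class X, so A is comfortable simply expanding it in place:
class X:
    def existing_feature(self):
        return "existing behavior"

    def new_feature(self):  # A adds the feature directly to X
        return "new behavior"

# Developer B didn't write X, is afraid of breaking it, and so leaves X
# untouched and isolates the change in a derived class instead:
class X2(X):
    def new_feature(self):  # B overrides in a subclass, preserving X as-is
        return "new behavior"

# Functionally the two designs are equivalent...
assert X().new_feature() == X2().new_feature()
```

The point is that nothing technical forced the X2 design into existence; the final class hierarchy is a fossil record of who trusted whom to change what.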
How do you avoid becoming a similar statistic? Simple: be flexible.
The more flexible you are when making the design -- the more open you are to new ideas and new ways of communicating -- the more likely you are to create a useful product. For those who looooooooooove process, then what you need is a process for injecting flexibility into your system when metrics demonstrate a communication problem.
The number one task of any business is to make money. The number two task is to improve inter-departmental communication. After that, all problems can be solved.
I've always said, the most important skill a technical person can possess is the ability to communicate... you might not have a remarkable impact on any one feature, but you'll be better positioned to understand the whole problem, and the whole solution. Talk with your peers, and make sure that the lines of communication are 100% open across divisions... especially divisions that hate each other. Make sure people feel connected, and that they can trust the opinions and needs of others.
Only then will a committee be able to design a system less dysfunctional than itself...
You know the phrase... The universal sign of such tremendous apathy, that you don't even care enough to say a real word... Until now, that is, because in 2009 "meh" will officially be in the dictionary.
Hurray! New words being born! A chance to celebrate!
Of course, the linguistic circle of life dictates that other words must die so new ones can be born... As others have noted, two unfortunate deaths include "anticipate" and "irony". The former is now used in place of "expect," thus losing its individual meaning and becoming a dead word. Irony, of course, has been misused in place of "coincidence" and "odd" for so dang long that I'm surprised when somebody actually uses it correctly...
Example... Assume some guy named Turd Ferguson is a college football player. He's exceptionally good, and is drafted to be in the NFL. However, in his very first game, he gets tackled hard, breaks his knee, and ends his career. That's not irony: that just sucks.
Next, assume Turd has a son: Turd Ferguson Junior... who is also an exceptional football player, and who also winds up in the NFL. However, in Junior's very first game, he too is tackled hard, he too breaks his knee, and he too ends his career. That's not irony: that's just coincidence.
Now... assume that instead of getting hurt in his first game, Junior tackles somebody named Mikey... Mikey breaks his leg, and Mikey ends his career. NOW it's ironic! It would be even more ironic if Mikey's father was the man who initially broke Turd Ferguson Senior's knee twenty years ago...
But does anybody care about the proper use of "irony?"
The Trappist Monks are widely known for making the best beer in the world... most of it is made in monasteries located in Belgium, using centuries-old processes. These Trappists are the quintessential monks: quiet, pious, hard working, and frugal. And man, they make great beer!
Contrary to popular belief, Trappist monks do not take a vow of silence; rather they vow to speak only when necessary. Which, surprisingly, is almost never.
To us gabby outsiders, they speak so little that we just assumed many of them took a vow never to speak... Who wouldn't make idle chit-chat while brewing a batch of awesome beer!?!? The alternative was just too bizarre for us to comprehend: speaking is rarely necessary, even when creating a world-class product.
And this rule applies for people who aren't even monks. Let me explain...
Last year I was introduced to a technique called Non-Violent Communication, which had many excellent suggestions for effective dialogs and running productive meetings... Its practitioners stressed that besides empathy, the most important communication skill is brevity. Specifically, you should limit yourself to no more than 40 words before coming to an actionable request.
State your case quickly, express what you need briefly, and then make a positive request of one specific person.
For example, a manager shouldn't just call a meeting, let out a deep sigh, and lament about how the sales numbers really suck this quarter... and then go on and on and on about what's wrong. That's a waste of everybody's time. I'm certain your employees already know that the numbers are bad. I'm certain your employees already know they "shouldn't drop the ball next time." They know, they know, they know... and now you just wasted an hour of everybody's productive day.
They don't care about what they shouldn't do... they only care about what they should do in order to move things forward.
Instead, try to realize that speaking is rarely necessary, even for a world-class team. Get clear on what it is that you need... determine specifically what actions need to be taken, and by whom. If you don't know what actions need to be taken, then your request would be to have your team help you find out. State your case in 40 words or less, make your request, and move on to new business.
Naturally, there are some concepts that are difficult to explain in fewer than 40 words. In those cases, you should write a report. No, I don't mean a mind-boggling array of PowerPoint slides... I mean a real, honest to god report. Publish it to your content management system, where it is widely available, then share it with your team. That way, your request would simply be "Please read my report, and send me your comments." Either way, the data will still be there in the future, on the off chance that anybody needs it.
Verbose information belongs in a published report, or a wiki... not in an email, and not in a meeting. Keep this in mind at all times, or you'll never make anything as good as a bottle of Westvleteren 12...
When there is a lack of unified purpose, information sharing leads to chaos... and sometimes can cause more problems than it solves. To illustrate this point, I'd like to share the legend of King Ammon.
In Plato's dialogue Phaedrus, Socrates told the tale of King Ammon. He was a wise and just ruler, and all the gods admired him and his virtues.
One day, Ammon was met by the Egyptian god Thoth, who was an inventor, and the "scribe of the gods." Thoth admired Ammon, and wanted to share his inventions with Ammon and all his Egyptian subjects. Ammon was impressed with most of the inventions... except for one: writing.
Ammon was not a fan of writing... and chided Thoth for creating it:
What you have discovered is a receipt for recollection, not memory. And as for wisdom, your pupils will have the reputation for it but not the reality: they will receive a quantity of information without proper instruction, and in consequence be thought very knowledgeable when they are for the most part quite ignorant. And because they are filled with the conceit of wisdom instead of real wisdom, they will be a burden to society.
Hmmm... so Ammon feared what would happen if somebody read something, didn't understand it, quoted it anyway to appear wise, but in actuality had no real wisdom... and in doing so became more powerful, perhaps even respected, so that people even followed him... but because he only appeared to be wise, he made bad decisions, and ultimately became a significant burden to his fellow men.
Gee... sound like anybody you know?
Naturally, we only have this great story because of the written word... so nobody would go so far as to claim that writing is bad. However this legend does bring up a valuable point for knowledge management systems:
We should NOT focus on sharing information; we should focus on teaching knowledge.
You shouldn't just dump data to a blog and expect people to read it... you shouldn't dump half-baked documentation into a wiki and expect others to maintain it... you shouldn't just deploy an enterprise search or ECM system, then allow it to become a dumping ground for "data."
What we need are systems that teach; not systems that share. Because without that context, without teaching, and without experience, sharing information could very likely lead to problems...
...and it might actually make you a burden to your fellow men.
You may have been one of the 2 million people who viewed Dr. Randy Pausch's "Last Lecture" on YouTube... it's about happiness and achieving your childhood dreams. He put it together a month after he found out he only had 6 months to live, and it's one of the best lectures I've ever seen:
Sadly, he just passed away today. Rest in peace, big guy... the world is genuinely better because of you.
Ever wonder why erroneous loudmouths get more airplay than the rest of us? I'm not just talking about radio shock jocks or political pundits, but technology bloggers as well. When you let your emotions run wild, and make crazy (probably false) posts, you usually get a bigger fan base.
Why the heck would that happen? Why do blogs with valid, rational discourse languish, whereas those who are wrong, wrong, just plain wrong get lots of viewers and comments?
Jeff Atwood over at Coding Horror had a recent epiphany along these lines. His technology blog is a little haphazard, filled with lots of good nuggets -- as well as plenty of corrections in the comments. Jeff conforms to the philosophy of strong opinions, loosely held. He says he's not an "expert," he's an amateur. But since software is such a new industry, pretty much everyone is an amateur... And, unlike most folks in the software industry, he's not afraid to admit it.
I think it goes a bit deeper than that...
When Jeff says something that is just plain wrong, it makes people angry, which makes them do something. They try to be the first to correct him in the comments, or it starts a conversation on other blogs that link back to him. His writing is humorous, and I've linked to some of his more controversial posts (such as Rails Is For Douchebags), but that doesn't mean his opinions are valid...
You don't get a popular blog by being correct: you get it by being wrong in a way that makes people react. If you're right, you'll get a decent reputation, but your zone of influence will be smaller. People easily forgive you for being wrong... but they never forgive you for being right.
Linkbait, flamebait, trollbait, whatever you want to call it... it works wonders to boost popularity.
UPDATE: Just to be clear, I like Coding Horror, as I mentioned in the comments below. I just wanted to make the observation that generating an emotional response seems to be the better path to blog popularity... for what it's worth.
A few weeks ago I gave a talk about Communication For Geeks at the Minneapolis MinneBar conference. I strongly believe that the majority of software failures are communication failures, and if geeks want to be a part of fun, successful projects, they had damn well better learn how to communicate... because most managers clearly can't.
It was a surprisingly popular talk: I had twice as many attendees as I had handouts...
Anyway, on Friday, I got an interesting call from one of the attendees, Kelly Coleman. He was excited to tell me about a situation where he used one of my tips to better communicate with one of his friends... Kelly took to heart one of the most important lessons geeks need to learn: use empathy before education! I was really happy to hear about it, so I thought I'd repeat the lesson here in case others might benefit:
Empathy is not Sympathy!
A lot of people confuse empathy and sympathy... I do it myself a lot. Sympathy is feeling what somebody else feels through you. When you are being sympathetic, you're not really helping much, because you're making the situation about you... In contrast, empathy is feeling what somebody else feels through them. You keep the focus on them, until you're certain they've expressed themselves fully.
To illustrate, the following would be sympathy:
Bob: I just got fired... Joe: Wow, that sucks... but don't worry, you'll be fine! I got fired a few years back, and there's always work available for talented guys like us, right?
Joe genuinely thinks he is being helpful... Joe is not being helpful! Joe isn't listening to Bob at all. Joe is rambling on about his own past, and about his theories of the job market. He's trying to connect with Bob, but he's using sympathy. Sympathy is dangerous, because it leaves Joe open for this:
Bob: What the hell do you know, Joe? That was years ago! You didn't have a house! You didn't have a wife and a kid to support! The job market was completely different back then! You have no clue about my problems! Get the hell away from me! Joe: ...I was only trying to help...
Bob is clearly in a lot of pain. He's afraid of a lot of things, and his good buddy Joe clearly isn't listening. So Bob lashes out, and wisely tells Joe to get the hell away from him. Then Joe gets defensive, and says something even stupider. With luck, they'll be friends again in a few weeks... but you never know.
In contrast, empathy almost always is better... it would look something like this:
Bob: I just got fired... Joe: Wow, that sucks... you must be feeling pretty scared right now, huh?
Ding! Ding! Ding! Ding! Give Joe a cookie!
See the difference? Joe didn't make it about himself... he kept his focus on Bob. He asked Bob how he was feeling, and after Bob answers, Joe should keep asking. He should let Bob vent about his situation: his wife, his kid, his house, the job market, whatever. Even if Joe knows a guy who might give Bob a job, Joe should shut the hell up until Bob's finished venting. This may only take five minutes, or it might take a whole hour. Either way, it's an important part of the process. Bob will not listen to what Joe has to say, unless Bob feels Joe fully understands his situation.
Empathy before education. Always.
How does Joe know when Bob's finished venting? He'll hear something different in Bob's voice: hope. When Bob is open for suggestions, he'll say something like, "what do you think I should do?" or "have you ever been in this situation before?" Only after Joe hears this, is Bob ready to listen to new ideas, new possibilities, and new ways of fixing this problem. Only after Joe hears hope, or a direct request for help, is Bob ready to hear what Joe wants to say. If Joe wants to help Bob, Joe needs patience.
Now... empathy is not easy, and it's extraordinarily difficult for engineers.
Most technical people have been brainwashed by years of "education" into believing that there's a "right way" to do everything, and that it's our job to fix it. When something is "wrong," we want to dive in and tell everybody how to make it "right" again. It's a trained compulsion. This is why engineers make lousy lovers, but excellent terrorists. In both cases, it's a lack of empathy that dooms us to this fantasy world of absolute right and wrong, making it impossible to see things from another perspective.
Sound like anybody you know?
As such, it will be difficult for software engineers to learn empathy... but they need lots of practice before they can move on to even more advanced forms of communication... which I'll be talking about at a later date ;-)
Should We Always Use Empathy?
That's a tricky question... empathy takes a lot of time, and sometimes you don't have the luxury. However, it is important to understand what empathy is, so when people "fall off the wagon" you won't take it personally...
For example, blogger James McGovern decided to practice empathy, which got some props from Billy... however, James' blogging style does not lend itself well to empathy... He's snarky, and enjoys inciting fights, so he can better understand which position holds up. If everything were puppies and rainbows, I'd probably stop reading his blog. No surprise that James then went back to his old self after about 23 hours...
And let's not forget Brock Samson's mystic journey in The Venture Bros.... where in a dream he learns the value of empathy and feels great... but then is confronted by his former special ops trainer:
Brock Samson: What about uhhh, humanity and empathy and all that garbage? Hunter: You're a tool, boy, a tool! Built for a single purpose by the United States who shut your third god damned eye for a good f$%&ing reason! You can't teach a hammer to love nails, son. That dog won't hunt!
Yep... an army of empaths sure would be cool... but in the meantime, we live in a world of conflict... so until everybody understands the power of empathy, it's probably best to know multiple ways to deal with conflict. In order, I prefer empathic communication, principled negotiation, then Brock Samson.
In the meantime... practice giving and getting empathy. It's far more powerful than you realize.