By Chris R. Chapman at December 15, 2007 04:46
Filed Under: Blog

Here is a collection of ten short tips I’ve compiled from my experiences blogging over the past few years that new and even intermediate bloggers may find useful.  This isn’t anything new, just some common-sense observations.

  1. Choose a secure password for administration and change it every so often.
    • Personally, I use KeePass to randomly generate my passwords and keep track of them for me;  I then back up my password database to a USB key for extra protection.  In this day and age, using passwords that can be guessed or compromised through social engineering is just so not cool.
  2. Keep backups of your posts using an RSS reader.
  3. Use CAPTCHA to protect your feedback list from spam;  better yet, use reCAPTCHA.
    • New bloggers are often dismayed when they find their posts hijacked with feedback spam containing URLs for pills, potions and other nefarious things.  Often this can be avoided by using a CAPTCHA plugin that prevents spambots from entering comments through a visual challenge (often called a Turing Test).  While good, CAPTCHA has proven to be hackable (see my old blog, for example).  I recommend the upgraded reCAPTCHA plugin that not only stymies 99.9999% of spam, but also helps digitize books.
  4. Close off comments for posts after 60–90 days.
    • This works in conjunction with #3 above to reduce your overall “attack surface” – this isn’t to say that spammers won’t try to force their comments into your blog by other means, but it goes a long way toward limiting their impact.  If you need feedback open for longer periods, either write a new post to extend the conversation, or better yet:  Set up a Yahoo! mail list or install a forum.
  5. Disable trackbacks for posts.
    • Trackbacks are automatic cross-referencing flags that blogs and sites use to notify one another when they’ve been referenced directly.  A good example of this is dotNETKicks, where if your post is “kicked” into their queue, it sends a trackback to your blog saying “you’ve been kicked” which gets rendered in your feedback list.  While a cool feature, it’s also a huge back door for spamdexing.  By disabling this, you’re not really going to miss much – if you use a feed service like FeedBurner, you’ll know about other people referring to your posts.
  6. If you’re using ASP.NET, make sure your pages are protected with validateRequest="true"
    • Yes, it seems obvious, but you’d be surprised at how many folks disable this, leaving their site open to script-injection and cross-site scripting attacks.  I see attempts on this all the time in my security logs for my site.
  7. Disallow <a> tags in your comments.
    • It’s a bit of a pain for those wanting to hyperlink away from a comment to check something out, but it prevents a lot of link harvesting, which results in just more spam and index climbing.
  8. Throttle bad IP ranges.
    • Often, you will notice attacks coming from common “ranges” of IP addresses – these are “open relay” proxies that spamdexers use to propagate their wares.  Either block these at the firewall (i.e., stop all inbound traffic from 192.168.199.*) or use an httpModule/httpHandler combination (i.e., UrlRewriter or similar) to block troublesome IP ranges and respond with a 403 “Forbidden” status code.
  9. Subscribe to a spam filtering service like Akismet.
    • If you’ve got a bad case of spamming, consider using a service like Akismet that can provide fairly comprehensive comment and trackback spam filtering.  Some blog services offer this as a matter of course;  others require you to actively enable the service.  See Scott Hanselman’s post, Preventing comment, trackback and referral spam in dasBlog, for details on using Subkismet.
  10. Be vigilant.
    • Check your security logs frequently to spot trends that don’t fit your usual daily traffic patterns;  monitor your comments via a feed reader to easily identify attacks;  keep looking for areas where you can “harden” your blog, whether with updates and patches or small code fixes.  If you have the ability, make the fix on your own rather than waiting for someone else to do it for you.
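On tip #6, request validation is on by default in ASP.NET, so the main risk is someone switching it off.  As a minimal, hypothetical web.config excerpt (a sketch, not a complete configuration), enforcing it site-wide looks like this:

```xml
<configuration>
  <system.web>
    <!-- Reject requests containing suspicious markup; this is the default,
         but stating it explicitly documents the intent -->
    <pages validateRequest="true" />
  </system.web>
</configuration>
```

Note that a @Page directive with validateRequest="false" can still override this on individual pages, so it pays to search your site for that attribute too.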
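Tip #7 is straightforward in most blog engines.  As a rough sketch (in Python rather than ASP.NET, with a hypothetical strip_anchor_tags helper), removing <a> tags while keeping the visible text looks like this:

```python
import re

def strip_anchor_tags(comment: str) -> str:
    """Remove <a> tags from a comment, keeping the link text."""
    # Drop opening <a ...> tags, then closing </a> tags (case-insensitive)
    comment = re.sub(r"(?i)<a\b[^>]*>", "", comment)
    return re.sub(r"(?i)</a>", "", comment)

print(strip_anchor_tags('Nice post! <a href="http://example.com/pills">cheap pills</a>'))
# prints: Nice post! cheap pills
```

Since the goal here is removal rather than parsing, a regex is defensible; if you allow any other markup in comments, a full HTML sanitizer is the safer choice.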
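For tip #8, the core of an IP-range block is a simple containment check.  The sketch below is Python rather than an ASP.NET httpModule, and the blocked range is a placeholder taken from the example in the tip:

```python
from ipaddress import ip_address, ip_network

# Placeholder range - substitute the ranges you actually see in your logs
BLOCKED_RANGES = [ip_network("192.168.199.0/24")]

def is_blocked(client_ip: str) -> bool:
    """Return True if the request should get a 403 Forbidden response."""
    addr = ip_address(client_ip)
    return any(addr in net for net in BLOCKED_RANGES)

print(is_blocked("192.168.199.42"))  # prints: True
print(is_blocked("8.8.8.8"))         # prints: False
```

In an httpModule you would run the same check early in the request pipeline and end the response with status 403 on a match.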

Of these tips, #3 has been most successful for me to-date.  There may come a time when even reCAPTCHA is foiled, but for now it’s working wonders!


By Chris R. Chapman at December 12, 2007 23:07
Filed Under: hacks

As promised, I’ve set up a wiki to help aggregate the information gathered so far in my efforts to black-box reverse engineer the Tassimo T-DISCs!  You can find the main wiki page at http://blog.chapmanconsulting.ca/wiki and the top page for Tassimo Hacking at http://blog.chapmanconsulting.ca/wiki/Tassimo%20Hacking.ashx

The pages are definitely works-in-progress, and best of all:  You can create an account to help out!  In fact, I’m really hoping that there will be some collaborative input to build the pages out into a hacking Tassimopedia.

I’ll be updating the pages on and off for the next while as I have a whack of other articles to finish and, of course, X-Mas errands to run!

By Chris R. Chapman at December 11, 2007 23:59
Filed Under: amuse, software development

Colorful cartoon caricatures of common coding calamities.  See all 10 here.

Monsters

By Chris R. Chapman at December 10, 2007 23:14
Filed Under: alt.net, software development

Yesterday I posted a fun list of possible Latin mottos for the fledgling ALT.NET “movement” over on the Yahoo! group mailing list – one of them, Aut disce aut discede (either learn or leave) was picked up by fellow ALT.NETter Scott Reynolds and served to inspire his post of the same name.  In it, he makes an eloquent if heavy-handed case for either being part of the solution or part of the problem when it comes to improving software development:

This is the most honest statement about software developers (and people in general) that I can make:  If you aren't willing to learn, you are obsolete and utterly useless.

New and unknown things trigger our fight or flight response just the same as if a bear came crashing through the woods at us.  What will differentiate you from the next guy is whether you choose to adapt, improve, and grow with new challenges or put your head in the sand.

I can’t say I really disagree with this – for too long, our industry has been getting by on its good looks, so to speak, and we’re now well-mired in our own technical debt because we’ve become complacent about continually improving our game.  Worse, we’re failing at preparing the next generation of developers for turning this situation around.

However, Scott and I begin to part company when it comes to the motivations for remedying this debt:

A lot of lashback for alt.net right now is coming from a position of fear.  A position of not wishing to come out of one's comfort zone.  This is the antithesis of what we should be about.  We should be out there pushing the envelope and making ourselves, our peers, our companies, our products, and our world better.  Chad touched on this in his post about professional responsibility.  I left a comment that I thought I would share to a greater audience.  It's a quote that Scott Bellware twittered one day a couple of weeks ago:

"Society runs on software.  Programming is a social responsibility".

While I’m in total agreement about pushing the envelope, I disagree entirely that “programming is a social responsibility”.  While a convenient metaphor, society does not run on software – it utilizes software.  Therefore (and with all due respect to Messrs. Bellware & Reynolds), programming carries a professional, not a social, responsibility.

Despite what Trinity is trying to tell you about following the white rabbit, society isn’t a computer construct or even a remote analog.  It’s about people and their interactions, shared belief systems, goals, aspirations, security, life, liberty, the pursuit of happiness, etc.  While software does play an important role in our daily lives, and can affect such things, it is a tool and not embedded firmware for people.  Modern society’s “software” is far older and has undergone much more rigorous refinement (and is still imperfect):  It encompasses fundamentals such as a constitution, the Rule of Law, right of habeas corpus, systems of governance, democratic elections, and so on.  Add a dash of free will and you have a party.

By way of contrast, software development is a skill and is thus practiced in either a professionally responsible or irresponsible manner.  This isn’t a thinly-veiled argument for small-tent elitism – all it takes to be professional is the desire to continually improve oneself through learning best practices and trying to implement them. 

For over three decades, our industry has practised irresponsible software development because of flawed practices that were co-opted without taking the time to understand their source.  Case in point:  Waterfall/BDUF – the “ground zero” from which all manner of recognized worst practices were spawned (and ironically, all best practices in response).  As a result, we’ve been left with, as Robert N. Charette notes in his 2005 IEEE Spectrum article, Why Software Fails, a legacy of epic, yet preventable software failures estimated in the tens of billions of dollars:

Worldwide, it's hard to say how many software projects fail or how much money is wasted as a result. If you define failure as the total abandonment of a project before or shortly after it is delivered, and if you accept a conservative failure rate of 5 percent, then billions of dollars are wasted each year on bad software.

For example, in 2004, the U.S. government spent $60 billion on software (not counting the embedded software in weapons systems); a 5 percent failure rate means $3 billion was probably wasted. However, after several decades as an IT consultant, I am convinced that the failure rate is 15 to 20 percent for projects that have budgets of $10 million or more. Looking at the total investment in new software projects—both government and corporate—over the last five years, I estimate that project failures have likely cost the U.S. economy at least $25 billion and maybe as much as $75 billion.

Charette attributes the industry’s high failure rate to a dozen key factors:

  1. Unrealistic or unarticulated project goals
  2. Inaccurate estimates of needed resources
  3. Badly defined system requirements
  4. Poor reporting of the project's status
  5. Unmanaged risks
  6. Poor communication among customers, developers, and users
  7. Use of immature technology
  8. Inability to handle the project's complexity
  9. Sloppy development practices
  10. Poor project management
  11. Stakeholder politics
  12. Commercial pressures

All of these should be familiar to anyone who identifies with the core ALT.NET values – they are our raison d’être.  They are what we continually strive to improve.

So what does this have to do with me, John/Jane Q. Developer?

Everything.  As the saying goes, “the buck stops here”.  You are the “pointy end” of the stick – irrespective of whether you’re coding the control systems for the space shuttle, imaging analysis for medical diagnostic equipment or a simple ASP.NET web app that renders a report – regard your work as mission critical and develop accordingly.

As a software developer, your professional responsibility (and thus imperative) is to challenge the conditions of IT project failure, such as those Charette identified, by making incremental changes to your development habits and skills, putting them into practice, and thereby influencing change in your colleagues.  In sum, to do as Ken Schwaber suggested when speaking in Vienna about what it takes to move toward agile software delivery: "Don't procrastinate; do something - no matter how small."

By Chris R. Chapman at December 07, 2007 23:03
Filed Under: webtools
Update:  Well, it didn't take long before someone coded up an ASP.NET wrapper control around the chart API.  Check out Christopher Pietschmann's blog for an overview on how he did it.  Nice work!

Via dotNETKicks, I learned this morning that Google has put out an API for rendering chart images to the browser by using URL arguments, in much the same manner as Edward Tufte's sparklines concept.  Whereas sparklines are intended for presenting "small, word-sized" graphics, like stock charts:

sparkline_graphs

Google's API permits the creation of larger charts of various types, with an array of graphic effects:

The above chart was created using this simple URL:

http://chart.apis.google.com/chart?
chs=200x125&chd=s:helloWorld&cht=lc&
chxt=x,y&chxl=0:|Mar|Apr|May|June|July|1:||50+Kb&
chf=c,lg,90,76A4FB,0.5,ffffff,0|bg,s,EFEFEF

chs: Chart size - 200x125 pixels
chd: Chart data - the s: prefix indicates simple text encoding;  in this case, sample data from Google, but this can encode small or large datasets
cht: Chart type - lc = line chart
chxt: Which axes to label - here, x and y
chxl: X and Y axis label text
chf: Chart and background fills - in this case, a linear gradient over the chart area plus a solid background
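To see how the pieces fit together, here's a quick Python sketch that rebuilds the line-chart URL above from its individual parameters (the values are copied verbatim from the example):

```python
from urllib.parse import urlencode

params = {
    "chs": "200x125",                                  # chart size
    "chd": "s:helloWorld",                             # data, simple encoding
    "cht": "lc",                                       # line chart
    "chxt": "x,y",                                     # label both axes
    "chxl": "0:|Mar|Apr|May|June|July|1:||50+Kb",      # axis label text
    "chf": "c,lg,90,76A4FB,0.5,ffffff,0|bg,s,EFEFEF",  # fills
}
# Keep the API's delimiter characters (: | , +) unescaped
url = "http://chart.apis.google.com/chart?" + urlencode(params, safe=":|,+")
print(url)
```

From here it's a one-liner to drop the result into an <img> tag and let Google's servers do the rendering.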

Here's a 3-D Pie:

And the URL that was used to create it:

http://chart.apis.google.com/chart?
cht=p3&chs=600x270&chd=s:Hellobla&
chl=May|Jun|Jul|Aug|Sep|Oct&chco=0000ff

The API has a whack of options for making all manner of charts, including scatter plots, 3-D pies, Venn diagrams, and more.  I wish this had existed when I was working on a Vista Gadget last year that required extensive charting capabilities - it would have solved a lot of frustration with a third-party JavaScript library...!

I can think of a number of applications where this could be used to quickly generate off-the-cuff data visualizations with very little overhead (besides network latency) - it would be great to hack into the back-end of this blog, for example, to provide graphic representations of hits, etc. like a mini-FeedBurner console.  It could also be used to provide visualizations of search results, hits, best matches, etc.  Maybe even an alternative visualization of a tag cloud.  It's only a matter of time before coders get creative and think of ways to make the API sing - the Venn Diagram chart, for example, presents some interesting opportunities:

Check it out - it's a bit of fun for your Friday afternoon!

About Me

I am a Toronto-based software consultant specializing in SharePoint, .NET technologies and agile/iterative/lean software project management practices.

I am also a former Microsoft Consulting Services (MCS) Consultant with experience providing enterprise customers with subject matter expertise for planning and deploying SharePoint as well as .NET application development best practices.  I am MCAD certified (2006) and earned my Professional Scrum Master I certification in late September 2010, having previously earned my Certified Scrum Master certification in 2006. (What's the difference?)