Jim Wilt and I had an interesting discussion today about the role of software architecture in the current economy. I shared some thoughts on an idea I’ve been mulling over that I call “micro architectures” (for lack of a better name).
Let me start with a personal dilemma: I’m debating moving my blog (currently running an old version of Community Server) to something different – either a different provider or the latest version of Community Server. Although it’s been very reliable, what concerns me about my blog is that I don’t intimately know how it works. I’ve looked through a lot of the forums, and even at other open source blog providers, but the architecture of everything I’ve seen so far seems too unwieldy for what I’m trying to accomplish.
While searching, I began asking myself the question – “Instead of the most architecturally correct design, what would be the smallest design that supports my needs? And more importantly, how would these two be different?” Small in this instance refers to the number of modules, configuration files, lines of code, and other parts of the design. I think as architects and developers we have a habit of baking configuration files, extensibility, and dependency injection into our designs from day one – even though the core use cases of the design don’t immediately demand them. We design too much in for the future or for edge cases, which ends up as “I’ve abstracted this setting into this_obscure_setting_config.xml just in case we need to switch it in the future”. Nice extensibility – but will anyone ever actually switch that setting? Really? And if someone did, would a recompile of the code really add that much headache compared to the additional abstraction and testing required for the extensibility? Jeffrey Palermo covered an element of this recently in his post about hard coding.
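To make the trade-off concrete, here’s a hypothetical sketch (Python for brevity; the config file name comes from my example above, and the `pageSize` setting is purely illustrative) contrasting the “extensible” version with the hardcoded one:

```python
import xml.etree.ElementTree as ET

def page_size_from_config(path="this_obscure_setting_config.xml"):
    # The "extensible" version: parse an XML file, find one element,
    # convert it to an integer - all to fetch a value that never changes.
    return int(ET.parse(path).getroot().findtext("pageSize"))

# The micro-architecture version: hardcode it.
# If the value ever changes, recompile and redeploy.
PAGE_SIZE = 10
```

The second version has nothing to misconfigure, nothing extra to test, and its value is visible right where it’s used.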
Coming back to my blog example, what would a “micro architecture” for my blog look like? I would assert that I could do the following:
Eliminate Elaborate Database Access Code. Do I really need it? Do I really need a database abstraction layer (myDal) that inherits from an interface (IDal), uses a configuration file (database_config.xml) and some dependency injection under the covers so that I can switch out the driver at some point in the future? Probably not.
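For contrast, here’s what the small end of that spectrum might look like – a hypothetical sketch (stdlib `sqlite3` standing in for whatever database the blog actually uses) that skips IDal, myDal, database_config.xml, and the dependency injection entirely:

```python
import sqlite3

def recent_posts(limit=10, db_path="blog.db"):
    # No interface, no config file, no injected driver:
    # open the database, run the query, return the rows.
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS posts (title TEXT, body TEXT)")
        rows = conn.execute(
            "SELECT title, body FROM posts ORDER BY rowid DESC LIMIT ?",
            (limit,),
        ).fetchall()
    return rows
```

If I ever genuinely need to swap databases, that’s a small, well-contained change – and arguably a healthier one than maintaining an abstraction layer that’s never exercised.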
Question the Need for a Database. Speaking of which, do I actually need the database itself? Access to the database (or lack thereof) seems to be the root cause of the issues I have when my blog goes down. Two primary considerations for using a database are performance and indexing. Performance? I would like to think that millions of people visit my blog every day, but the reality is somewhat different. Even with 50 comments attached to a blog post, a file system solution would probably perform well enough for anyone reading the blog. Indexing? Sure, I would like search enabled on my blog, but why not just redirect to (or embed) an existing Google search, parameterized to my domain?
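Both halves of that idea fit in a few lines. Here’s a hypothetical sketch – one text file per post (the layout is my invention, not Community Server’s), plus a Google search URL scoped to a domain with the standard `site:` operator:

```python
import tempfile
from pathlib import Path
from urllib.parse import quote_plus

def save_post(root, slug, title, body):
    # One post = one file. Title and body separated by a blank line.
    (Path(root) / f"{slug}.txt").write_text(f"{title}\n\n{body}", encoding="utf-8")

def load_post(root, slug):
    text = (Path(root) / f"{slug}.txt").read_text(encoding="utf-8")
    title, _, body = text.partition("\n\n")
    return title, body

def search_url(domain, query):
    # "Indexing" outsourced: a Google search restricted to my domain.
    return f"https://www.google.com/search?q=site:{domain}+{quote_plus(query)}"

# Demo in a throwaway directory:
demo_dir = tempfile.mkdtemp()
save_post(demo_dir, "micro-architectures", "Micro Architectures",
          "The smallest design that supports my needs.")
```

No connection pool, no schema migrations, and when the “database” is down, so is the whole disk – at which point a blog outage is the least of my worries.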
Create a Minimal User Interface. I got to thinking about what HTML controls I would need to supply to enable updates and edits to posts – the question is, do I really need a fully functioning Admin UI to update the blog? Would it not be simpler to expose only a MetaWeblog or ATOM publishing API instead, and use something like Windows Live Writer to create and edit my posts?
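A hypothetical sketch of that idea: the method names follow the MetaWeblog API (which Windows Live Writer speaks over XML-RPC), but the in-memory storage and port are purely illustrative:

```python
from xmlrpc.server import SimpleXMLRPCServer

POSTS = {}  # illustrative in-memory store; a real blog would persist these

def new_post(blogid, username, password, struct, publish):
    # metaWeblog.newPost: returns the new post's id as a string.
    postid = str(len(POSTS) + 1)
    POSTS[postid] = struct
    return postid

def get_post(postid, username, password):
    # metaWeblog.getPost: returns the post struct.
    return POSTS[postid]

def serve(port=8080):
    server = SimpleXMLRPCServer(("localhost", port))
    server.register_function(new_post, "metaWeblog.newPost")
    server.register_function(get_post, "metaWeblog.getPost")
    server.serve_forever()
```

The entire “admin UI” becomes someone else’s desktop application; my blog only has to answer a handful of well-specified RPC calls.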
No Admin UI for Creating “About” and Other Pages. Again, do I really need the administration overhead for handling this? Can I not just create a new .ASPX or PHP page and attach it to the site? Seriously?
Remove Skins and Styles from the Code. No-brainer. Reference a CSS file and be done with it. The blog’s only responsibility should be to output well-formatted HTML that can be styled with CSS.
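In other words, the rendering code shrinks to something like this hypothetical sketch (the `style.css` name is a placeholder):

```python
def render_post(title, body_html):
    # Emit semantic HTML only; all visual styling lives in the stylesheet.
    return (
        "<!DOCTYPE html><html><head>"
        f"<title>{title}</title>"
        '<link rel="stylesheet" href="style.css">'
        "</head><body>"
        f"<article><h1>{title}</h1>{body_html}</article>"
        "</body></html>"
    )
```

Swapping the look of the entire blog then means editing one CSS file – no skin engine, no theme packages, no recompile.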
I’m sure there’s more that I’m missing, but hopefully you get the idea. To sum this up and conclude, I would argue that a “micro architecture” could have the following principles:
It’s OK to ignore edge cases. The architecture is designed only against core use cases, and nothing else. With the exception of input validation, edge cases are not considered.
It’s OK to write code – as long as that functionality doesn’t exist in another solution that can be reused. Subsystems are written only when there is not a valid external solution that can be used.
It’s OK to hardcode configuration values. Hardcoding is OK for core use cases (provided that it doesn’t compromise security – you don’t want to be hardcoding usernames and passwords, of course).
It’s OK to recompile. Recompiling is really OK if edge cases are introduced later. I actually think this is healthy because it encourages developers to open up the solution (and possibly improve it as a result of what they’ve learned since they last wrote the code).
It’s OK to unit test. Because a greater focus is given to the code of an application (as opposed to 50 million different configuration files), unit tests and test driven development become even more important.
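That last principle is easy to illustrate. With configuration hardcoded, behavior is testable directly – there are no config fixtures to stand up first. A hypothetical sketch using the stdlib unittest module (the `PAGE_SIZE` value and `paginate` function are mine, for illustration):

```python
import unittest

PAGE_SIZE = 10  # hardcoded "configuration"

def paginate(items, page):
    # Return one page of items; page numbering starts at 0.
    start = page * PAGE_SIZE
    return items[start:start + PAGE_SIZE]

class PaginateTests(unittest.TestCase):
    def test_first_page_is_full(self):
        self.assertEqual(paginate(list(range(25)), 0), list(range(10)))

    def test_last_page_is_partial(self):
        self.assertEqual(len(paginate(list(range(25)), 2)), 5)
```

Run with `python -m unittest`. No XML to parse, no container to wire up – the test exercises exactly the code that ships.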
Maybe I’ll actually try this out and see what happens?