How to Stay Agile? Avoid Over-Engineering Software!
Posted on November 14, 2013
A software architect must take into account many known factors and still stay agile. At the same time, she must pave the way for scalability, security and more features to come. As the quote goes: “good judgment comes from experience, experience comes from bad judgment”. Over-engineering software projects is very common. In tune with the start-up culture and lean development principle of eliminating waste, I argue that over-engineering is just as harmful as failing to deliver.
Over-Engineering That Prevents You From Staying Agile
Below I will present a few over-engineering sins I’m guilty of and the impact they’ve had on the project.
The initial requirements of the project described an online video-streaming component to be embedded in a turn-key video portal product. That meant it would have many direct clients and a great number of end clients. Clients buying the portal script were expected to want custom front ends in the form of custom color schemes or skins.
The list of features was fairly long. And all features had to be customizable from external files.
Retail licenses for this product provided open-source access to about 95% of the codebase, which meant that clients could modify their own solutions. Knowing that, the architecture had to expose the simplest possible interfaces.
Above all, this was a product designed to be a capital generator for years to come. Therefore, it had to support a fair number of revisions and additional features as the market demanded new functionality.
So let’s look at the 3 decisions I could have lived without.
1. Presentation Model Layer
The presentation model provides a separation of application logic from the views. Basically, there should be absolutely no functionality implemented in the views. They just display data and provide inputs. All communication between a view and the application should be done via the presentation model.
Looking at the project documentation above, we see that a lot of changes were foreseen for the user interface. The presentation model was an obvious choice.
Pros:
- Makes it easy for a junior developer to modify a view without breaking application logic.
- Maintains a clear grouping of the features needed by each view (see the Facade pattern).

Cons:
- Introduces a significant quantity of boilerplate code that simply carries fields from the model to the views.
- Requires design decisions for each feature, such as introducing a new presentation class or reusing an existing one (does that violate the Single Responsibility Principle? the Open-Closed Principle?). All of that takes time and skill to decide properly.
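The pattern can be sketched in a few lines. This is a hypothetical illustration, not code from the project: the class names (`PlayerModel`, `PlayerPresentationModel`) and the seek/progress feature are invented for the example.

```typescript
// Application-level state the view should never touch directly.
class PlayerModel {
  streamUrl = "";
  positionSeconds = 0;
  durationSeconds = 0;
}

// Presentation model: exposes exactly what the view needs, nothing more.
// The view binds to these getters and calls these methods; it never sees
// the underlying model.
class PlayerPresentationModel {
  constructor(private model: PlayerModel) {}

  // Read-only, view-friendly projection of the model.
  get progressPercent(): number {
    return this.model.durationSeconds === 0
      ? 0
      : (100 * this.model.positionSeconds) / this.model.durationSeconds;
  }

  // Inputs from the view are funneled through here.
  seekTo(percent: number): void {
    this.model.positionSeconds = (percent / 100) * this.model.durationSeconds;
  }
}

const model = new PlayerModel();
model.durationSeconds = 200;
const pm = new PlayerPresentationModel(model);
pm.seekTo(25);
console.log(pm.progressPercent); // 25
```

Every field the view needs costs a getter like `progressPercent`, and every input costs a method like `seekTo`; that carrying code is the boilerplate the bullet above refers to.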
So why didn’t it pay off? As expected, some clients did make their own changes to the application. However, hardly any had problems doing so, and I suspect they wouldn’t have had any even if the presentation layer hadn’t been there. Meanwhile, it increased development time because of all the additional boilerplate code and decisions that had to be made.
2. Service Delegation Layer
The delegation pattern provides an easy way to change the implementation of a service without modifying client code. It makes sense to use this technique for services you expect to change over time: perhaps the data access class will change when you replace the SQL database with a NoSQL one, or the mail service will move from POP3 to IMAP, and so on…
Having developed streaming clients in the past, I knew there would be more similar features in the future. The idea was to implement a service layer reusable in other projects, so I assumed this additional layer would be a brilliant idea in that respect.
Pros:
- Offers a quick and clean way to change the implementation of services.

Cons:
- Introduces boilerplate code that simply forwards the service’s public methods to the client.
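As a rough sketch of what such a delegate looks like (the interface, the SQL-backed implementation, and the URL scheme are all invented for illustration):

```typescript
// The contract client code depends on.
interface StreamCatalogService {
  lookupStreamUrl(videoId: string): string;
}

// One concrete implementation; could later be swapped for a NoSQL-backed one.
class SqlCatalogService implements StreamCatalogService {
  lookupStreamUrl(videoId: string): string {
    return `rtmp://media.example.com/${videoId}`;
  }
}

// The delegate: all it does is forward calls to whichever implementation
// it was configured with -- this forwarding is the boilerplate cost.
class CatalogServiceDelegate implements StreamCatalogService {
  constructor(private impl: StreamCatalogService) {}

  lookupStreamUrl(videoId: string): string {
    return this.impl.lookupStreamUrl(videoId);
  }
}

const catalog = new CatalogServiceDelegate(new SqlCatalogService());
console.log(catalog.lookupStreamUrl("intro")); // rtmp://media.example.com/intro
```

Swapping the backend means changing only the one `new SqlCatalogService()` expression (or, with dependency injection, one line of configuration); every caller of `lookupStreamUrl` stays untouched.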
Using this pattern for the services in my application didn’t do any damage, but it didn’t do any good either. As it turns out, the services were so stable that I never touched them after the initial release. Furthermore, clients didn’t need changes to the basic functionality, and if they had, it would have been just as easy to swap the service implementations through the framework’s bean-injection settings.
3. Global Event Dispatching
Flex relies heavily on events, and Swiz provides a global dispatcher that can be injected anywhere. The appeal is loose coupling: events allow complex behavior without carrying a ton of parameters everywhere.
Pros:
- Encourages loose coupling.
- Allows good separation of logic into appropriate handler classes.

Cons:
- It can get hard to track all the handlers.
- Multiple handlers for the same global event may lead to inconsistencies in the model due to concurrency or priority issues.
- In languages without a native event-handling feature, it may be difficult to implement a thread-safe solution.
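A minimal global dispatcher, loosely in the spirit of the Swiz dispatcher described above, can be sketched as follows; the class, the event name, and both handlers are illustrative, not from the project:

```typescript
type Handler = (payload: unknown) => void;

class EventDispatcher {
  private handlers = new Map<string, Handler[]>();

  on(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  dispatch(event: string, payload?: unknown): void {
    for (const h of this.handlers.get(event) ?? []) h(payload);
  }
}

// A single shared instance plays the role of the injected global dispatcher.
const globalDispatcher = new EventDispatcher();

// Two decoupled handlers react to the same event -- convenient, but exactly
// the situation where handler ordering and model consistency can go wrong.
globalDispatcher.on("video.selected", (id) => console.log("load stream", id));
globalDispatcher.on("video.selected", (id) => console.log("record view", id));

globalDispatcher.dispatch("video.selected", "intro-42");
```

Note that the dispatcher hides who reacts to `video.selected` and in what order; with dozens of events and handlers, that invisibility is precisely the maintenance cost discussed below.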
The reason this design decision didn’t pay for itself is probably overuse throughout the project. A good old-fashioned method call can save a lot of time and headaches compared with a design that tries to eliminate coupling completely. So I would say the global event dispatcher is a good pattern, but use it in moderation: overuse hides application logic in multiple handlers, making maintenance and development hard. Also, depending on the environment and implementation, global event handling may get you in trouble with thread safety.
Now, I’m not saying the above-mentioned architectural decisions never pay off. In some cases, they are mandatory. However, the cost/gain ratio needs to be carefully analyzed before cementing a decision in thousands of lines of code, lines you can’t afford to replace (or are too attached to). As a rule of thumb, default to KISS.
As Robert C. Martin states in his book “Clean Code”, Big Design Up Front (BDUF) inhibits adapting to change. Thus, to be agile and responsive to dynamic requirements, one must start with a simple but well-decoupled architecture. In software, we benefit from a flexibility in building our systems that other craftsmen don’t have: the ability to refactor large portions of a system and massively alter its architecture at any stage of completion (provided you have enough tests; see test-driven development).
However, we are inclined to keep useless modules and relationships just for the sake of not throwing away previous work. This inclination tends to degrade readability and maintainability. The situation is called technical debt, and the interest it carries is decreased productivity and increased complexity in the form of hacks and workarounds.
To avoid this trap, keep costs low, and maximize agility, teams must implement only what is needed, when it is needed, and choose the right boundaries for decoupling. Over-engineering is counter-productive and limits future directions.
There is a TED talk on “intelligence” whose core claim is that “intelligence is correlated to the maximization of future freedom to act”. So do your future self (and your team) a favor and start small: you keep the freedom to change anything and everything as needed!