Friday, December 21, 2012

A Nod to Professionalism

When did our industry make the transition to one where foul language, off-color jokes demeaning to women, and the glorification of illicit drugs became a respectable and celebrated way to talk about software practices?  Did I miss something?  An announcement or white paper declaring the new appropriate way to present to our industry?

I attended the software craftsmanship conference in Chicago last month, and while I enjoyed the conference and found most of the talks very enlightening, I was disappointed with the pervasive lack of professionalism.  Apparently the F-bomb is the new vogue in software craftsmanship, because roughly every third speaker felt it necessary to scatter it throughout their presentation.  Hard-core drug references were also prolific; I didn't realize those were an important part of software craftsmanship.  I recognize that this conference is less formal than some, and a noticeably smaller group than others, but I wasn't the only one who was dismayed by the nature of some of the talks.

The conference had time allocated for lightning talks, and during a break someone gave a lightning talk on this subject.  He talked about the need for professionalism in our industry and lamented the lack of it at the conference.  He specifically mentioned an off-color, demeaning joke made by one of the presenters.  After his lightning talk I went to shake his hand and tell him I appreciated his courage to stand up for what he felt was important, but I was surprised and pleased to find that I had to wait in line behind several others who were doing the same.

I realize that I am more sensitive to this sort of thing than some, that others have different backgrounds than I do, and that many found no offense in the material.  But I wanted to be a voice for those who did.  And hopefully someone, somewhere, will see this post and realize, as they prepare their presentation, that while there are many who will not find offense in R-rated material in a conference talk, there are also many who will.

I do feel it is important to maintain a decent level of professionalism in our craft.  I hope that this was an anomaly and not the beginning of a new trend.  

Monday, October 1, 2012

Right-Sized Methodology

In a recent agile round-table meeting that I attended in Salt Lake, Mike Clement (@mdclement) suggested that we often tend to lose sight of the original goals of Agile as laid out in the Agile Manifesto.  I have to say, I love agile, and I particularly love XP (which actually pre-dates Agile).  And, to be totally frank, deep down, I am probably of the opinion that the XP principles are critical to Agile success and possibly superior to other methodologies.  But that kind of makes me part of the problem.  One of the agile principles in the manifesto states that we should value "Individuals and interactions over processes and tools".  So to say that XP is superior is to focus more on the process than on what best facilitates "individuals and interactions."  And even once a methodology is chosen, it needs to be constantly reviewed and adjusted.

The truth of the matter is that teams should create a methodology that fits the unique environment they are in, and that methodology should focus on realizing as much value as possible from the ideas in the agile manifesto.  Every software team exists in an environment that is unique to that team; the uniqueness comes from a host of variables, including company size, team size, company and team values, individual personalities, company goals, etc.  And that means that the methodology they develop (and continue to refine over time) should be tailored to their unique circumstances.

Our industry is still learning, and I think Agile is an important step on that path.  As we come to appreciate what agile gives us, we need to be careful that we don't become so focused on our process that we forget about the values.  We need to frequently review how we are writing software and determine whether unnecessary or less-effective processes are adding dead weight to our chosen methodology.  The objective should be to design a methodology that is as lean as possible without sacrificing important values.

This is one thing that I love about working at Pluralsight.  Our methodology is the lightest methodology I've ever experienced.  But our approach wouldn't work everywhere; we're lucky that our company environment allows for such a lean approach.  This is due to a few factors, including:  
  • The Pluralsight partners are developers, so we speak the same language; 
  • Keith, our IT manager, is not just the CTO but also a partner; 
  • our dev team, for now, is small (4 developers plus Keith, who used to do more development and is now moving more into directing the team); 
  • and our dev team has a lot of experience (we each have 13+ years of experience and several years of experience with agile).  
Our methodology is similar to XP (TDD, pairing, acceptance tests, CI, frequent releases, etc) but much lighter in some areas.  Here are some areas where our environment has allowed us to be more lean:

The Planning Game
Since the person who manages our team also happens to be a Pluralsight partner, he already completely understands the company vision.  He has also been very involved with the software (he wrote the initial software and still gets involved in the code from time to time).  Having someone who understands both sides so well allows us to cut some corners.  This, combined with the fact that our team is still small, has allowed us to completely forgo sizing meetings.  The partners discuss and develop the IT roadmap, and in their meetings Keith can provide feedback from a technical standpoint without having to do a round-trip to the dev team.  We use a kanban board: we just put the stories on the board and Keith prioritizes them based on company priorities.  When we're done working on a task, we just grab the next task.  And this all works very well without the devs ever having to be involved in a planning meeting.  Normally, this would make me nervous because the developers need customer affinity, but the awesome thing is, we still get that because Keith has the company vision and shares it with us freely in our "stand up" meetings.

Daily Stand-Ups (aka scrums)
We do have them, every day, but stand-ups are called stand-ups for a reason, right?  You're supposed to stand up because it keeps people from going on and on about the details of some problem, since people get tired of standing.  Some people might be shocked to learn we never stand up in our stand-ups.  And some of our stand-ups do go longer than a "status update" meeting should.  But our stand-ups are more than discussions about status and roadblocks -- because our team is so small, we can afford to talk architecture or business in our stand-ups.  Although there may be occasions where part of the discussion isn't totally applicable to everyone in the meeting, I find our stand-up meetings to be very valuable.  It is a great opportunity for the team to talk together about important directions and even get into the details of some of the roadblocks.  It is also a great opportunity for Keith to share the business vision and direction changes.  This meeting ends up taking the place of at least 3 meetings (stand-up, architectural design, and parts of the planning game).  And because our team is small, we still manage to typically get it all done in under 30 minutes each morning.  I realize that for some, 30 minutes is an awfully long stand-up, but this is much more valuable than just a stand-up.  We won't be small forever, so this may have to change someday to a more traditional stand-up supported by other meetings, but for now, it's awesome.

Short Iterations
I've worked on teams that have had 1-week and 2-week iterations.  The odd thing about our environment is that we don't really even have defined iterations.  I realize this may sound like cowboy coding, but it really isn't.  Because we don't need a planning game (for the reasons discussed above), there is less of a need for defined iterations.  We are able to keep this from becoming a cowboy coding shop because of other practices we follow closely (TDD, pairing, acceptance tests, etc.).  Additionally, the partners understand the importance of quality, so the devs are completely comfortable (and actually encouraged) to slow down and do it right.  When we finish a task, we push it to our repository and let our CI builds run.  If it is a meaningful enough change, we push it to our staging environment, where the changes are QA'd and accepted by other developers; once it passes, and everyone agrees it's safe to push all the current changes, it is pushed to production.

Collective Ownership
In Extreme Programming Installed, Ron Jeffries says, "I'm not afraid to change my own code.  And it's all my own code."  This is a benefit that comes from pairing and switching pairs regularly.  When we have to work on code we're not familiar with, it takes longer and we're more timid.  Because our team is so small, this one just comes naturally.  We all get into the various parts of the code eventually, and the more we do, the more comfortable we are making changes.

Pluralsight is growing quickly and it seems inevitable that our team will grow too.  As our team grows, we will no doubt have to make changes to our process and add additional pieces that we don't have now.  And that's ok, in fact, that's how it should be -- we should always be tailoring our methodology to fit the current needs of our team.  But for now, I am totally loving how lean we can be.

What are your thoughts and experiences with designing a methodology to fit your needs? 

Tuesday, August 14, 2012

Is TDD a Silver Bullet?

I recently read this post by Jeff Langr and really appreciated the comments made by his former co-worker, Tim.  Tim clearly was not sold on TDD right from the beginning, but his comments, after being encouraged to do it for two years, are insightful.

In the article Jeff asks Tim, "What did you finally see or recognize in TDD that you initially couldn’t see?" The bullet list of reasons why he likes TDD very closely matches my own list.  I also find it interesting that Tim concludes his findings with the statement that TDD is not a silver bullet when he is clearly a full-on advocate of TDD. But it is true what he says, writing software is challenging, even with TDD. In fact, for a while when you are learning TDD, TDD itself can be very frustrating.

Tim mentions in his response to Jeff that it took about 2 years for it to really sink in.  I can echo that statement; it was much the same for me.  To clarify: I was able to write fairly decent tests after a couple of months, and to do TDD fairly well within a few, but it really does take about 2 years before you gain that confidence about TDD where you feel like you really "get it".

Alistair Cockburn talks in some of his books and in this post about the three different levels of learning referred to as Shu, Ha, Ri.  He talks about how, when we are learning new skills, we all pass through these three levels of learning.

During the Shu phase we are building our technical knowledge.  We are trying to learn the rules that define the skill, and we find it very frustrating when someone breaks those rules, because it is difficult for us to understand why we would break them in one case but not in another.

During the Ha phase, we begin to reflect on what we have learned in the Shu phase.  Everything we have learned so far has been committed to "muscle memory", as Cockburn states it, and we can now begin to "reason about the background behind these techniques".

In the Ri phase, the student has mastered the skill at such a level that he can begin to "think originally" about the skill and produce new and better forms of the practice.

It seems to me that TDD has an incredibly long Shu phase.  And, with TDD, I think there is a phase, lasting about 2 months, that predates the Shu phase.  A phase which could be aptly labeled the "what sort of voodoo magic is this crap?" phase.  For some of us, our heads just spin for about 2 months during this phase.  But then we begin to grasp the basics, and we are fully into the Shu phase, where we can look at other code samples and reproduce the same effect in the code we are trying to test.

It does seem that we stay in this phase for quite a while, probably around 1-2 years until we begin to really feel we understand and champion the reasons why we are doing it.  Perhaps this is why some teams struggle with picking up TDD, especially if they don't have someone who is already in the Ha or Ri phase to help them along.

So it is true that TDD is not a silver bullet, if what you're hoping for is something that makes software development easy (which is not to say it doesn't make it easier).  Writing software is complex and sometimes frustratingly difficult even when done right.  But I agree with Tim: it makes software more maintainable, cleaner, and better architected; it makes writing software faster; it gives me more confidence; and it is more fun to write code with TDD.  If your team is just starting to experiment with TDD, stick with it even if it seems foreign and more difficult at first.  In the end, I hope you'll find what Tim and I have found...once you get it, you love it.

Thursday, July 5, 2012

Rhino Mocks Isn't Complicated

I've heard multiple people state that they prefer Moq over Rhino Mocks because the Rhino Mocks syntax isn't as clean.  I'd like to help dispel that myth.  It is true that, if you choose the wrong syntax, Rhino can be more complicated.  This is because Rhino has been around for a while, and its original syntax pre-dates the improved Arrange/Act/Assert syntax.  I go into a lot more detail on Rhino Mocks and the appropriate way to use it in my new Pluralsight course on Rhino Mocks, but to dispel the myth that Rhino Mocks is complicated, I would like to compare the syntax for creating stubs and mocks using Rhino vs. Moq:

Moq Syntax:
  //Arrange
  var mockUserRepository = new Mock<IUserRepository>();
  var classUnderTest = new ClassToTest(mockUserRepository.Object);
  mockUserRepository
      .Setup(x => x.GetUserByName("user-name"))
      .Returns(new User());

  //Act
  classUnderTest.MethodUnderTest();

  //Assert
  mockUserRepository
      .Verify(x => x.GetUserByName("user-name"));
Rhino Stub Syntax:
  //Arrange
  var mockUserRepository = MockRepository.GenerateMock<IUserRepository>();
  var classUnderTest = new ClassToTest(mockUserRepository);
  mockUserRepository
      .Stub(x => x.GetUserByName("user-name"))
      .Returns(new User());

  //Act
  classUnderTest.MethodUnderTest();

  //Assert
  mockUserRepository
      .AssertWasCalled(x => x.GetUserByName("user-name"));
Notice that there are only four differences in those examples: the use of "new Mock" instead of MockRepository, the term Setup vs. Stub, the term Verify vs. AssertWasCalled, and the need to call ".Object" on the mock object when using Moq. So why do so many people think Rhino Mocks is more complicated? Because of its older Record/Replay syntax, which required code like the following:

var mocks = new MockRepository();
var mockUserRepository = mocks.CreateMock<IUserRepository>();

using ( mocks.Record() )
{
    Expect
        .Call( mockUserRepository.GetUserByName("user-name") )
        .Returns( new User() );
}

using ( mocks.Playback() )
{
    var classUnderTest = new ClassToTest(mockUserRepository);
    classUnderTest.MethodUnderTest();
}
There is another benefit to using Rhino Mocks. The support for Rhino by StructureMap AutoMocker is cleaner, because the mocks returned by Rhino are actual implementations of the interface being mocked rather than a wrapper around the implementation, which is what Moq provides. AutoMocker allows you to abstract away the construction of the class under test so that your tests aren't coupled to its constructors. This reduces refactoring tension when new dependencies are added to your classes. It also cleans up your tests a bit when you have a number of dependencies. If you haven't used AutoMocker, you can quickly learn how it works by checking out the AutoMocker module in my Pluralsight Rhino Mocks video. But here is a quick comparison of using AutoMocker with Moq vs. Rhino:

Moq syntax using MoqAutoMocker:
  //Arrange
  var autoMocker = new MoqAutoMocker<ClassUnderTest>();
  Mock.Get(autoMocker.Get<IUserRepository>())
      .Setup(x => x.GetUserByName("user-name"))
      .Returns(new User());

  //Act
  autoMocker.ClassUnderTest.MethodUnderTest();

  //Assert
  Mock.Get(autoMocker.Get<IUserRepository>())
      .Verify(x => x.GetUserByName("user-name"));
Rhino syntax using RhinoAutoMocker:
  //Arrange
  var autoMocker = new RhinoAutoMocker<ClassUnderTest>();
  autoMocker.Get<IUserRepository>()
      .Stub(x => x.GetUserByName("user-name"))
      .Returns(new User());

  //Act
  autoMocker.ClassUnderTest.MethodUnderTest();

  //Assert
  autoMocker.Get<IUserRepository>()
      .AssertWasCalled(x => x.GetUserByName("user-name"));
I hope that this clarifies, for some, the correct way to use Rhino Mocks and illustrates that it is every bit as simple as Moq when used correctly.

Monday, May 21, 2012

SOLID Code for SOLID Reasons

We should write good code because good code is easy to maintain, not because it makes the code easier to unit test.  However, it just so happens that well-written code is easy to unit test; and testing our code, especially test-driving our code, helps us to write good code.  But ease of unit testing is not the only reason for writing good code; in fact, it is one of the very last reasons.

So how do we define good code? I think a great starting point, at least for OO code, is the SOLID principles of object-oriented design defined by Bob Martin, together with the concepts of bounded contexts and anti-corruption layers defined by Eric Evans.

Wikipedia has a great overview of SOLID, or, if you prefer, here it is from the horse's mouth. SOLID is an acronym for the following 5 principles: Single Responsibility Principle, Open/Closed Principle, Liskov Substitution Principle, Interface Segregation Principle, and Dependency Inversion Principle.  If you are not familiar with SOLID, I'd recommend you read that now.

Eric Evans recommends in his book, Domain Driven Design, that we define the various bounded contexts that exist in our code.  Typically when we work on a project, we have a code base that we work on most often, and then there are the external systems with which our code communicates.  That is an example of bounded contexts -- with our traditional code base being one bounded context and each external system being considered a separate bounded context.  Evans suggests that when we identify a context boundary we should use anti-corruption layers to prevent the contexts from leaking through to one another.  An anti-corruption layer often takes the form of an adapter that translates and isolates the external context from our code.  This has two very valuable advantages: first, it prevents our code from having to speak the language of the other context (except within the adapter itself); and second, it isolates our code from changes to the other context.  If the other context makes dramatic changes to its interface, we have no need to panic, because that code is not sprinkled throughout our bounded context; it is isolated to the anti-corruption layer.

With that understanding, I would like to address some questions that come up frequently when doing TDD (test-driven development):

Why do we create small classes and break up complex tasks among multiple objects?  It is NOT because it makes it easier to write isolated unit tests, and it is NOT because it solves the "how do I test private methods?" question.  We do it because classes that do too much work cause spaghetti-like coupling, and maintaining the large class, and all the classes that interact with it, becomes tiresome and error-prone.  (Single Responsibility, Interface Segregation and Open/Closed Principles)
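As a tiny sketch of the idea (the order-processing names here are invented purely for illustration): instead of one class that validates, prices, and coordinates, each job gets its own small class plus a thin coordinator:

using System;

public class Order { public int Quantity { get; set; } }

// Each small class has a single reason to change...
public class OrderValidator
{
    public bool IsValid(Order order) { return order.Quantity > 0; }
}

public class OrderPricer
{
    public decimal PriceFor(Order order) { return order.Quantity * 9.99m; }
}

// ...and the coordinator only wires them together, so a pricing-rule
// change never touches validation code, and vice versa.
public class OrderProcessor
{
    private readonly OrderValidator _validator = new OrderValidator();
    private readonly OrderPricer _pricer = new OrderPricer();

    public decimal Process(Order order)
    {
        if (!_validator.IsValid(order))
            throw new ArgumentException("Invalid order");
        return _pricer.PriceFor(order);
    }
}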

Why do we inject dependencies and depend on abstractions (interfaces) rather than concretions (classes)?  It is NOT because it makes our code easier to unit test, even though it is nearly impossible to unit test code that depends on concretions.  We depend on abstractions because it makes our code much more loosely coupled, and therefore much easier to change without negative consequences.  (Dependency Inversion Principle)
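A quick sketch of the difference (the types are invented for the example, echoing the repository from my Rhino Mocks post):

public interface IUserRepository { User GetUserByName(string name); }
public class User { }

public class SqlUserRepository : IUserRepository
{
    public User GetUserByName(string name) { return new User(); }
}

// Depending on a concretion: this class constructs SqlUserRepository
// itself, so any change to that class (or any desire to swap it out)
// ripples directly into this one.
public class TightlyCoupledUserService
{
    private readonly SqlUserRepository _repository = new SqlUserRepository();
}

// Depending on an abstraction: the concrete repository is injected,
// so this class never changes when the implementation does.
public class UserService
{
    private readonly IUserRepository _repository;

    public UserService(IUserRepository repository)
    {
        _repository = repository;
    }

    public User GetUser(string name)
    {
        return _repository.GetUserByName(name);
    }
}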

Why do we wrap third-party dependencies in adapters?  It is NOT because it's hard to isolate the third-party dependencies from our tests.  We do it because third-party dependencies, especially those over which we have little or no control, are subject to change, and we want to isolate our code from that change.  We also do it because sometimes those third parties do not believe in the same principles, such as SOLID, that we do, and creating an adapter allows us to adhere to the principles we hold dear. (Anti-Corruption Layer)
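Here is a minimal sketch of such an adapter; the vendor SDK and all of its types are invented for illustration:

// Hypothetical vendor SDK -- the other bounded context, speaking its
// own language of account strings and cents.
public class AcmePaymentSdk
{
    public string SubmitTxn(string acct, int amountInCents) { return "OK"; }
}

// The abstraction our code depends on, expressed in our domain language.
public interface IPaymentGateway
{
    bool Charge(CustomerAccount account, decimal amount);
}

public class CustomerAccount
{
    public string Number { get; set; }
}

// The anti-corruption layer: it translates our domain types into the
// vendor's types, and it is the only place that changes if the vendor
// dramatically changes its interface.
public class AcmePaymentAdapter : IPaymentGateway
{
    private readonly AcmePaymentSdk _sdk = new AcmePaymentSdk();

    public bool Charge(CustomerAccount account, decimal amount)
    {
        var result = _sdk.SubmitTxn(account.Number, (int)(amount * 100));
        return result == "OK";
    }
}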

When we see code that does not adhere to our principles, we need to stop arguing that the reason to change it is for the sake of unit testing -- because, frankly, that's a tough argument to defend.  We first need to understand the reasons why we believe what we do, and "because it makes it unit testable" shouldn't be a primary reason.  Furthermore, when we believe in a core principle like those above, it should guide everything we do, and arguments like "It's too hard in this case" or "It doesn't really apply here because I have a simpler way to do it even though it violates my principles" should rarely, if ever, be valid.  When we find ourselves making arguments like those, and we really think about the reasons why we believe in these principles, we will almost always find that the principle still applies even in complex (or very simple) cases.

The good news is that TDD, done right, helps us to realize when we are violating our own principles.  In my experience, when I've run into a hard-to-unit-test piece of code, it has always been because of a violation of one of the above principles.  The failure, then, is not a failure of our testing tools; it is our failure to adhere to these good coding principles.

And this is why I so dislike moles as test doubles, and why they are my biggest beef with the VS11 testing framework, which includes them (they're called shims in the VS11 framework).  When you use moles (or shims) in your unit-testing framework, it keeps you from recognizing that you have poorly-written code that does not adhere to principles like those above.

A classic example of how the VS11 testing framework is going to be widely misused can be found in Rich Czyzewski's blog post, Noninvasive Unit Testing in ASP.NET MVC4 - A Microsoft Fakes Deep Dive.  I understand the temptation to want to solve problems like those Rich discusses using moles and I think his post was very clear and well thought out.  Using moles to test (or test-drive) poorly designed code can be easier (up front) than writing the code correctly, especially when working with legacy code.  But, I fundamentally disagree with his assertion that KISS (keep it simple, stupid) and YAGNI (you ain't gonna need it) should be used to dismiss such fundamental principles as SOLID and context boundaries.  Even Peter Provost, a Visual Studio program manager lead at Microsoft, disagrees with using shims in this manner as he lays out in his post on shims.

Our industry is young and evolving.  We need to be sure, as we evolve, that we are basing our evolution on sound principles, not just on what makes our daily jobs less frustrating up-front, especially, when it causes long-term maintainability consequences.  What do you think?

Monday, April 30, 2012

The Err of Business Analysts

First, let me say, I'm sorry to the Business Analysts (BAs) who may stumble upon this post, I'm sure you will have lots of disagreements with my assertions.

I read this article on InfoQ this morning and it made me start thinking...again...about business analysts. I'm not particularly interested in the argument over the terminology of business analyst vs. business architect, but, if we're going to accept Malik's definition of those two roles, I'll take a business architect over a business analyst any day. Personally, the term business analyst makes me shudder.

I've been a developer for about 18 years and I have worked in a number of different IT organizations, including small and large shops, both with and without BAs, and with BAs who held different types of roles. In my many adventures, I have come to very much dislike the role of business analyst, especially IT-based business analysts. Again, sorry to some of my friends who fit in this category -- it's not you I dislike, it's the role. And the same holds true for business-based analysts, depending on what their true role is. The real problem that I have seen with BAs comes down to Customer Affinity. You really do want your developers to be interested in and excited about "the business, its processes and rules". The more involved and educated your developers are about the business domain and vision, the better prepared they will be to design software that fits that vision. I thought Eric Evans said it well when he stated in Domain Driven Design, "...if programmers are not interested in the domain, they learn only what the application should do, not the principles behind it. Useful software can be built that way, but the project will never arrive at a point where powerful new features unfold as corollaries to older features."

The problem with introducing BAs is that they are roadblocks to customer affinity for the developers. The typical reason for creating a business analyst role is at the root of the very problem. Typically a business analyst is used to do a lot of the research work to understand the business and then to document the requirements so that the developers can focus on development -- essentially the same as saying, "let's not waste the developers' time learning the business vision so they can focus on developing the requirements." Unfortunately, there is intrinsic value in the work required to understand the business, and the best way -- perhaps the only way -- to find that value is in the back-and-forth discussion that has to occur to translate business vision and needs into software. That value gets lost in communication, even with the most meticulous and skilled BAs. Even the use of BAs in an agile environment (where face-to-face communication is valued over meticulous documentation) leads to failure. The in-depth discussions, with their back-and-forth questions and answers, yield value to the BAs, but that value gets lost in translation to the developers. No amount of communication with the BA will ever make up for the lost opportunity to learn the business vision from the business experts.

This is because of the communication gaps that we all must jump. If you're not familiar with the term "communication gaps", I'd recommend taking a look at that link, and in fact the whole section on communication in Alistair Cockburn's book Agile Software Development. Communication is difficult; we all get it wrong, all the time. We can never be fully sure that someone else truly understands our point of view or that we have fully understood another's. When you inject a business analyst between a business expert and a developer, you now have to jump two communication gaps -- business to BA and BA to developer -- greatly increasing the risk of loss of fidelity and almost ensuring a loss of vision. Furthermore, rather than shortening those communication gaps between the business and the developers over time, in some cases you end up widening them as miscommunication increases.

The problem with BAs is that they are responsible for understanding the vision for the business from the domain experts rather than being responsible for the vision itself. This is why IT-based BAs are the worst type of BA -- in my opinion. They have IT responsibilities, not business responsibilities. They fall squarely into this pitfall of the double communication gap. Only slightly better than the IT-based BAs are business-based BAs (BAs who report through the business management, not through IT) whose primary responsibility is to communicate with IT. This is because their primary responsibility is still going to be trying to understand the vision from the domain experts rather than being a domain expert themselves.

Of course, the best scenario is one where the business owner and the software developer are the same person. Then you have zero communication gaps. But scenarios like that are not very common. And this leads me to believe the best scenario is to not have BAs at all. Unless BA stands for Business Architect and you accept Malik's definition of that term. I don't care what term we use, but the optimal scenario occurs when business departments hire individuals who are responsible for driving the success of that department (or parts of that department) -- which inevitably means developing a vision, current and future, of what success means for that department. And then, secondarily, this person is also responsible for helping IT to understand that vision. Helping IT understand that vision should be the smaller portion of their job (although a very important one). It is requisite that those filling this role are very capable communicators, but that value is lost if they are not primarily responsible for the success of their department.

Customer affinity is a tremendous value, and in my experience business analysts, even great ones, are an obstacle to customer affinity.

On a side note, this is one of many reasons why I love working at Pluralsight. The developers work directly with the Partners who hold the vision, current and future, of Pluralsight. Much of that is funneled through one of the partners, Keith Brown, who has been involved in the code base. And so, not only does he have the business vision, he also understands the code -- in some cases better than the developers. It is a unique and beneficial blend, and further, if we have any questions we are free to speak with anyone in the company, including the partners, for any clarification. It allows me to get excited about the business and enjoy my work more and it gives me greater confidence that I am writing the software the business wants.

Thursday, April 26, 2012

Customer Affinity

I liked Martin Fowler's article on Customer Affinity that he recently retreaded. I've worked places before where the developers were not allowed to talk to the customers and were forced to work through BAs. I learned then that business analysts in general are an obstruction to Customer Affinity unless those analysts are embedded within the department they represent (i.e., they are domain experts, not IT liaisons). I really liked his point that developers should not be measured by the frameworks and algorithms they know, but by their ability to understand and get excited about the business domain.

Friday, April 13, 2012

Simplicity by Edsger Dijkstra

I love this quote from Edsger Dijkstra (1972 winner of the Turing Award):
"Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better."


Friday, March 16, 2012

VS11 Fakes Framework

It seems to me that Microsoft still doesn't quite understand the serious TDD culture.  I recently saw an InfoQ article entitled VS11 Gets Better Unit Testing Tools, Fakes Framework.  My assumption is that the word "better" in that title applies only to the unit testing tools, not the fakes framework.

Although I like that Microsoft has, for a while now, recognized the existence of the testing community and is trying to improve their tools to fit that need, it seems that they often miss important details.  Admittedly, this is beta, and I only know as much about their fakes framework as I've read in that article and the links from it; however, it seems from what I read that they are not including mocks in their fakes framework.  They have stubs, and what they are calling "shims".  Shims essentially allow you to provide delegates that operate in place of the actual method calls on the object you are faking -- kinda like stubs, but more like fakes, since you can provide a method that executes and therefore do more than just specify a return value.  This would be great, except that it is completely unnecessary, and I think it is going to lead to some awful, spaghetti-like test code.
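From the article's description, a shim-based test looks something like the sketch below.  This is the canonical DateTime example from Microsoft's documentation, and it assumes a Fakes assembly has been generated for System by Visual Studio:

using System;
using Microsoft.QualityTools.Testing.Fakes;

public void Bonus_IsCalculatedAsOfYearEnd()
{
    // All shims must run inside a ShimsContext so the detours get undone.
    using (ShimsContext.Create())
    {
        // This delegate runs in place of every call to DateTime.Now --
        // even though the code under test never asked for that dependency.
        System.Fakes.ShimDateTime.NowGet = () => new DateTime(2012, 12, 31);

        // ... call the code under test that secretly reads DateTime.Now ...
    }
}

It is that hidden, ambient replacement of arbitrary calls that worries me: nothing in the class under test advertises that it depends on DateTime.Now.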

The other problem I see with shims, if I am understanding how they are implemented, is that you use Visual Studio to generate your shim classes.  On a side note, I was very happy to see they are deprecating private accessors, but it seems that shims are going to have a lot of the same problems.  For example, if I am coding against a shim and I decide I don't like the name of a method on the object I am shimming, I'll want to refactor it right there.  I fear that if I refactor a shim, I am only refactoring the shim, not the real class, and so shims are going to be an obstruction to refactoring, just like private accessors are.

If Microsoft wants to include a fakes framework in VS, I don't understand why they don't embrace what is already working well in the TDD community.  Fakes frameworks such as Rhino Mocks and Moq already give you what you need, and they both embrace the Arrange-Act-Assert approach, which leads to much cleaner tests.  With Arrange-Act-Assert you set up your test at the top (arrange), then make the call to the system under test (act), and then you do all your assertions (assert).  By not including mocks, the VS11 fakes framework will force people to do assertions as part of the arrange phase of their tests, as suggested in the InfoQ article when the author stated, "Mocks are missing, but you can do assertions within the stub method implementations."
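To make that concrete, here is a hypothetical comparison.  The stub class name below follows the Fakes naming convention for a generated stub of IUserRepository, as I understand it; the mock version reuses the Moq types from my Rhino Mocks post:

// Assertion buried in the arrange phase: the test fails inside the
// stub delegate, long before any clearly-labeled assert section.
var stubRepository = new StubIUserRepository
{
    GetUserByNameString = name =>
    {
        Assert.AreEqual("user-name", name);  // asserting while arranging
        return new User();
    }
};

// Arrange-Act-Assert with a mock: the expectation is verified at the
// bottom of the test, where a reader expects to find it.
var mockRepository = new Mock<IUserRepository>();
var classUnderTest = new ClassToTest(mockRepository.Object);

classUnderTest.MethodUnderTest();

mockRepository.Verify(x => x.GetUserByName("user-name"));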


Perhaps the most frustrating thing about this new fakes framework is that lots of people who are earnestly trying to learn how to do TDD will be taught that shims are the right thing to do, and they will have no understanding of a true mock.  That means that, even though I can just ignore the existence of the new fakes framework myself, developers everywhere will start doing things the Microsoft way, and someday I'll have to work on code somewhere that uses shims instead of something more reasonable, because its authors embraced the new framework.


In my opinion, this is another failed Microsoft attempt to emulate what the rest of the development world has already figured out.

Tuesday, March 13, 2012

Code Simply

I've been coding for quite a while.  Unfortunately, I spent a lot of my career not realizing I was writing bad code.  A few years ago, I think I started to learn what it really means to write good code (of course, I thought that at the beginning of my career too).  This happened when I began working on my first real agile project.  The methodology was XP.  I started learning about things like loose coupling, dependency injection, test-driven development, etc. -- things that began to help me write more simplified code.  Having said that, I think I've always had a propensity to try to write the simplest code that could possibly work; I just didn't always know how.


Of course, I've written code that started out simple and when I was done I looked back and thought it was a mess.  And, admittedly, I still do that.  It seems inevitable that today's pride will always be tomorrow's embarrassment.  But I do feel I'm doing better.


All around me I see examples of things I could do better, or at least, when I open my eyes that's what I see.  There are some things that I believe in that have become guiding principles, things like loosely coupled classes, test-driven development, preferring composition over inheritance, the simplest design that could possibly work, etc.  And there are other things that seem like a good idea and that I hope are right, things like Domain Driven Design, for example.  And of course, there are more concrete things that I recognize I could learn more about, like when Mike Clement (@mdclement) tried to melt my mind recently in his session "Linq (From the Inside)" at Utah Code Camp.


The more I learn, the more I realize how much more there is to learn. Some things, when you hear them, seem obvious and simple; others can be quite complex and difficult to understand at first. The interesting thing is, learning complex and difficult things about writing good software doesn't lead to more complex code; it leads to simpler code.  The more we learn, the more we are empowered to create clean, simple code.  The measure of a good developer is not his (or her) ability to create impressive frameworks and complex code; it is instead his ability to identify the simplest solution to a given problem that truly delivers all the desired features to the user.  I hope to be able to learn to do that better.


I've found that as I learn, I want to share, and that has led me to create this blog.  


It can be my chronicle of today's proud moments turned into tomorrow's embarrassments published to forever live in infamy on the interwebs.