DCI and the Open/Closed Principle

The open/closed principle (OCP) is a fundamental rule of thumb in object-oriented languages. It has a hand in proper inheritance, polymorphism, and encapsulation, amongst other core properties of object-oriented programming.

The open/closed principle says that we should refine classes to the point at which we eliminate churn. In other words, the fewer times we need to open a file for modification, the better. With DCI, we can compose objects while still following OCP.

Extension is Inheritance

Wikipedia's definition of inheritance:

Inheritance is a way to reuse code of existing objects, or to establish a subtype from an existing object, or both, depending upon programming language support.

By using #extend to modify objects at runtime, we are both reusing code from data objects while also forming a new subtype of the data object.

A typical DCI context might look like this:

app/use_cases/customer_purchases_book.rb

customer = User.find(1) # customer is a data object
customer.extend(Customer) # inject the Customer role
customer.purchase(book) # invoke Customer#purchase

After calling #extend, the user object can be used as both a data object and a purchasing customer. The #purchase method likely uses attributes of the user object to create joins between the user and the book. We're reusing code from the object's former self.

Similarly, the new object is now a subtype of its former self. That is, the Customer version of the user can be used polymorphically in place of the data object itself.
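
A quick sketch of that claim, reusing the names from above: after #extend, the object still answers to its original type, so it can be passed anywhere a plain User is expected.

customer = User.find(1)
customer.extend(Customer)

customer.is_a?(User)     #=> true, still a User as far as callers are concerned
customer.is_a?(Customer) #=> true, and it now plays the Customer role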

The Open/Closed Principle (OCP)

The open/closed principle is often discussed in the context of inheritance; we use inheritance to adhere to the "closed" aspect of OCP. In order to follow OCP, a class should be open for extension, but closed for modification. Let's look at how the principle could be applied with classical inheritance to reimplement the above scenario.

We have a dumb data object:

class User
  # A dumb data object
end

To abide by the "closed" aspect of OCP, we define a subtype of the User class; we do not modify the class itself:

class Customer < User
  def purchase(book)
    # Update the system to record purchase
  end
end

Somewhere else in our codebase, we tell a user to purchase his book:

customer = Customer.new
customer.purchase(book)

This is great: we've accomplished OCP by ensuring that any customer-related aspects of a User are neatly tucked away in the Customer class. In order to change the behavior of a user, we formed a new class while leaving the User class alone.

This is the guts of the open/closed principle. We want to structure our classes in such a way as to ensure they never need to change. Guaranteeing that the classes don't change is also a function of their method bodies.

DCI and OCP

Following OCP without incorporating the Data, Context and Interaction architecture has proven to lead to looser coupling, stronger encapsulation, and higher cohesion. Just apply it and your world will be rainbows and ponies!

Wrong. While OCP has absolutely helped in producing higher quality code, it's just another lofty object-oriented principle. It's very difficult to adhere to all principles, and some may be entirely inappropriate in various scenarios. The SOLID principles (of which OCP is one) are a great frame of reference when discussing software design; however, heeding them 100% of the time is, frankly, impossible.

I often find it very difficult to ensure the first iteration of my core software, test suite, and ancillary code meet the qualifications of the SOLID principles. Not because I don't understand or refuse to apply them, but because I'm human and I'm working with frequently-varying business rules. Principles in general end up being this pie-in-the-sky goal; I prefer to just write software.

One of the reasons I love DCI so much is because it forces you to work in an orthogonal way. It breaks the cemented programming models we've seen for over 20 years, models which, in my opinion, do not lend themselves to these principles. DCI acts much like a lighthouse: guiding you towards proper object orientation.

DCI enables you to automatically apply many best practice principles in object oriented programming. The open/closed principle is one.

Closed for Modification

The whole point of DCI is to decouple what changes from what remains constant. In DCI, our data objects are strictly persistence related, and as such, do not change frequently. The way in which we use data objects is often what changes.

So, when we build out a data object...

app/models/user.rb

class User < ActiveRecord::Base
  # A dumb data object
end

...it's closed for business.

DCI tells us that if we want to add behavior to this class, we should be doing so within a role. A deliberate effect of this is that our class remains closed. OCP is telling us to optimize our classes so that we never need to modify them. This aspect of OCP is baked into the core of DCI.

Open for Extension

The name says everything. The best way to accomplish DCI in Ruby is to use #extend. We seek to inject roles into objects at runtime to accomplish our behavioral needs. Let's create our Customer role:

app/roles/customer.rb

module Customer
  def purchase(book)
    # Update the system to record purchase
  end
end

We would then join our data and roles within a context:

app/use_cases/customer_purchases_book.rb

class CustomerPurchasesBook
  def initialize(user, book)
    @customer, @book = user, book
    @customer.extend(Customer)
  end

  def call
    @customer.purchase(@book)
  end
end

The open/closed principle states that a class should be open for extension. Within the above context, we extend our user object with the Customer role. Our DCI code adheres to this rule.

OCP talks a lot about extension of classes via inheritance. Demonstrations of OCP are usually forged with classes, instead of objects. In the above paragraph, I say that classes are open for extension, but the user object is extended. When we define a class, it's simply a container in which methods live. That container then becomes part of an object's lookup hierarchy. So, behaviorally speaking, there's no semantic difference between composing an object from scratch with DCI and creating an instance of a class.
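
A small sketch of that point, reusing the User data class and Customer role from above: #extend places the module in the object's own lookup chain, not the class's.

user = User.new
user.extend(Customer)

user.singleton_class.ancestors.include?(Customer) #=> true, part of this object's lookup
User.ancestors.include?(Customer)                 #=> false, the class itself is untouched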

In the customer example above, we use #extend as a means of composing the customer object to include its necessary behavior. We do this in lieu of classical inheritance. As I mentioned earlier in this article, extension is inheritance.

The Silver Bullet

By applying DCI, you are ever-so-nicely nudged into following OCP. DCI is a paradigm shift, but it's coated with reward. By simply working in objects and extending them at runtime, you are guided towards many well-respected, object-oriented principles. The strong emphasis DCI puts on decoupling static classes from dynamic behavior means that your classes remain closed for modification.

DCI contexts are naturally built for OCP. Use cases rarely change. If a user is buying a book, the use case of that purchase remains relatively constant. Since contexts act as simple glue between data and roles, if a use case changes, it's likely to be a new context. In this regard, contexts remain closed for modification.

DCI won't help you properly construct your roles, but it does guide you in the right direction. Since roles are actor-based, their methods tend to be use case specific. This means that role methods don't need to accommodate drastic variations. If variation increases, I tend to reach for service objects to abstract that complexity.

There is no silver bullet to following object-oriented principles. We're always making tradeoffs. Managing complexity is inherently complex. DCI can help you cope by ensuring your objects remain open for extension, yet closed for modification.

Posted by Mike Pack on 12/18/2012 at 08:14AM

Tags: dci, oop, solid, ocp


The First Step to Applying Design Patterns: Don't

Design patterns are awesome. The more we build software with them in mind, the better off we'll be as a community. They can help us elegantly construct solutions which can be readily discussed with peers. They're common solutions to common problems. They're not just common solutions, however. They're battle tested, proven, performant and generally considered "the best" solution. Design patterns are the apotheosis, the epitome, of solution.

In this article, I'll look at varying levels of design pattern application, starting from worse to better, and ultimately landing on what I would consider the utopia of software engineering. The ideas in this article are largely derived from what I've observed, devoted and reasoned about.

Working from Scratch

Personally, one of the most compelling exercises in software engineering is exploration. Just like in any other engineering field, the problem set expands indefinitely, and thus, so does our solution set. As businesses strive to keep a competitive edge, engineers must continue to solve problems which are both new and challenging. Through the process of solving new problems, we manage to come up with some not-so-pleasant-to-work-with solutions. Doing so is natural and healthy and is just about the only way we can continue to improve, especially when first learning. In fact, we're going to create one of those not-so-good solutions right now.

Take, for example, a simple arithmetic problem:

1 + 1 = 2

Imagine modern programming languages didn't have a + operator. Knowing the result is 2, how would you prove the left-hand side (1 + 1)? Well, let's briefly explore one option. For all intents and purposes, the following could be written in pseudocode. It's not the running code that matters, it's the exploration process which invokes an active mind.

Let's stick to Ruby idioms and reopen the Fixnum class to define a + method:

class Fixnum
  def +(other)
    if self == 1 and other == 1
      2
    end
  end
end

Ruby's + operator aids in this process, but let's call the + method directly:

1.+(1) #=> should == 2

Nothing tricky going on here. What if we want to evaluate 1 + 2? The most obvious thing is to add some conditional branching to our + method:

class Fixnum
  def +(other)
    if self == 1 and other == 1
      2
    elsif self == 1 and other == 2
      3
    end
  end
end

This is where our solution starts to fall apart. While this would work with a minimal set of operands, as our set grows, our conditional logic grows linearly, if not exponentially. At this point in the exploration process, we probably want to reconsider our solution. It's easy to recognize this first iteration is heading down the wrong path. Given some background in computer science, you might try refactoring this solution to use binary instead.

Feel free to skip the following code, it's not the destination that matters, but the journey by which we got there. Here's our final binary addition code:

class BinaryPlus
  def initialize(first, second)
    @first, @second = first, second
    # to_s accepts a base to convert to. In this case, base 2.
    @first_bin  = @first.to_s(2)
    @second_bin = @second.to_s(2)
    normalize
  end

  def +
    carry = '0'
    result_bin = ''

    @max_size.times do |i|
      # We want to work in reverse, from the rightmost bit
      index = @max_size - i - 1
      first_bit, second_bit = @first_bin[index], @second_bin[index]

      if first_bit == '1' and second_bit == '1'
        result_bin << carry
        carry = '1'
      else
        if first_bit == '1' or second_bit == '1'
          if carry == '1'
            result_bin << '0'
            # carry remains 1
          else
            result_bin << '1'
            carry = '0'
          end
        else
          result_bin << carry
          carry = '0'
        end
      end
    end

    # Is there still a carry hangin' around?
    result_bin << '1' if carry == '1'

    result_bin.reverse.to_i(2)
  end

  private

  def normalize
    # We want both binary numbers to have the same length
    @max_size   = @first_bin.size < @second_bin.size ? @second_bin.size : @first_bin.size
    @first_bin  = @first_bin.rjust(@max_size, '0')
    @second_bin = @second_bin.rjust(@max_size, '0')
  end
end

Which we would invoke with:

BinaryPlus.new(3, 4).+ #=> should == 7

At this point, we've managed to weave our way through a forest of solutions to land on one that doesn't require us to change the code to accommodate new operands. Aside from increasing the maintainability of the code, going through this process has likely taught us a few things about doing basic arithmetic in Ruby:

  • The underlying principles are not as simple as the syntax leads us to believe.
  • Considering potential operands is likely something we should do before writing code (TDD).
  • Ruby has built-in methods for base conversion (see the short example after this list).
  • (The list goes on depending on the explorer.)
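
For the curious, the built-in base-conversion methods used by BinaryPlus are just these two:

7.to_s(2)     #=> "111" (Integer to a binary string)
"111".to_i(2) #=> 7     (binary string back to an Integer)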

This process is both fruitful and enlightening. It's one of beauty and purity. Only by actually solving a problem can we truly say we've conquered it. This is the sensation I seek every day. That of utter accomplishment. This is software engineering, and only through time can we become better at finding maintainable solutions.

There's a catch. I'm not the best problem solver in the world, and neither are you. Individually, we simply can't grasp the vast landscape of problems, much less solve them all enough times that we can confidently say we have the best solution. Collectively, we all strive for the best solutions and combine our results. It's called the Gang of Four, not the Gang of One.

Aside: For those interested in the actual source of Fixnum#+, check out the Ruby source. Also, more on binary arithmetic.

Applying Design Patterns

We live in a beautiful age where all problems are already solved. We can thank Leonhard Euler, Carl Gauss and Isaac Newton for advanced mathematics and forming the foundation of computer science. We can thank Ewald Christian von Kleist, Benjamin Franklin and Alessandro Volta for their work in electricity so we can program on the airplane. We can thank Alan Turing and Donald Knuth for modern computer science and Dennis Ritchie for C. We can thank Matz for Ruby. And we can thank the Gang of Four for design patterns.

Design patterns help us do one thing really well: think and speak in the abstract. Given a problem with input i1, i2 and i3, design patterns can help us elegantly solve such a problem by correct association of i1, i2 and i3. They're generic solutions to generic problems. By its very definition, a (software) pattern is a theme of recurring solutions. The primary benefit of using patterns is we can circumvent a large degree of work. We no longer have to reinvent the "undo" button, the command pattern has already been discussed and documented.

It's very easy to apply design patterns. All you have to do is know they exist. If I know the factory pattern exists and it's a tried-and-true technique of generating objects, all I have to do is research the pattern and follow the steps. With the resources available to us today, we can read about common problems and their resolutions with a single Google search. There's really no excuse for not applying design patterns at work every day. They can drastically simplify code, increase modularity, increase legibility, decrease duplication, improve translation to English, and the list goes on.
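
For instance, here's a minimal, hypothetical sketch of what following the factory pattern's steps might produce in Ruby (the class and method names are made up for illustration):

class PdfExport; end
class CsvExport; end

class ExportFactory
  # Callers ask the factory for an object by name instead of instantiating
  # concrete classes themselves.
  FORMATS = { pdf: PdfExport, csv: CsvExport }

  def self.build(format)
    FORMATS.fetch(format).new
  end
end

ExportFactory.build(:csv) # returns a new CsvExport instance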

If I had to draw a conclusion right now, I would say "use design patterns." But I don't, so I would rather say something a little more robust.

Design patterns can be extremely helpful in crafting beautiful code, but the way in which they're applied often determines their usefulness. Applying a design pattern in the wrong scenario can push you into a corner, ultimately leading to more disarray than would have been present if it weren't for the design pattern. I'm going to pick on the singleton pattern a bit.

Singletons get a bad rep. In my opinion, rightfully so. Let's look at a situation where "applying a design pattern" can be discouraging.

We only have one file system, right? Naively, I'm thinking, "I know the singleton pattern, that would be a great fit here!" Let's create a file system singleton that writes some text to /dev/null:

require 'singleton'

class DevNullSingleton
  include Singleton

  def write(text)
    File.open('/dev/null', 'w') do |file|
      file.write text
    end
  end
end

We can use the singleton by referencing its instance:

DevNullSingleton.instance.write('Something to dev null')

Realistically, this has the same semantics as setting a global constant if we didn't want to use Ruby's singleton library:

DEV_NULL = DevNullSingleton.new
# ... later in the code ...
DEV_NULL.write('Something to dev null')

So, now our application grows, and we need another file system writer that outputs to /tmp. We're posed with a few options.

We can rename our singleton and allow the #write method to accept a path. The API looks like this:

FileSystemSingleton.instance.write('/dev/null', 'Something to dev null')
FileSystemSingleton.instance.write('/tmp', 'Something to tmp')

This is bad. If we want to write numerous things to /dev/null, we have a large degree of duplication:

FileSystemSingleton.instance.write('/dev/null', 'Something to dev null')
FileSystemSingleton.instance.write('/dev/null', 'Something else to dev null')
FileSystemSingleton.instance.write('/dev/null', 'Another thing to dev null')

Alternatively, we can create a new singleton class that writes to /tmp:

class TmpSingleton
  include Singleton

  def write(text)
    # ...
  end
end

But now, every time we want to write to a different location on the file system, we need to create a new singleton class. Not great, either.

Probably, the better option is to break ties with the singleton and start instantiating classes normally:

class FileSystem
  def initialize(path)
    @path = path
  end

  def write(text)
    File.open(@path, 'w') do |file|
      file.write text
    end
  end
end

Now, when we want to write multiple times to /dev/null, we instantiate only once and use it as we would any other class:

dev_null = FileSystem.new('/dev/null')
dev_null.write('Something to dev null')
dev_null.write('Something else to dev null')
dev_null.write('Another thing to dev null')

Here's My Gripe

I don't have anything against the singleton pattern, per se. I have issues with the process by which it's applied. In a number of cases, I've seen design patterns applied in the following steps:

  1. Read up on design patterns.
  2. Think in terms of design patterns.
  3. Apply design patterns.

It's really awesome to read as much as possible, but things start to fall apart around Step 2. If design patterns become the only lens by which you see your software, you'll inevitably end up pigeonholed like the singleton situation above.

Don't name your designs after patterns. This often happens because early in the design process you say, "I can use a singleton here!" So you go about defining classes as you're elaborating the design. Early, it makes sense to name something "FileSystemSingleton" so you can follow the design as it's being built. It acts as a form of documentation. However, it does that, and only that. "FileSystemSingleton" is no more descriptive or expressive than "FileSystem." In fact, it just adds noise. If you name something "BubbleSortStrategy" to denote the strategy pattern, but later compositionally apply subsequent "strategies", is it still technically a strategy? Is it a component of an overall strategy? Drop the "Strategy" and just call it "BubbleSort." That way, no matter whether your design is in fact the strategy pattern, a derivation thereof, or something completely different, it doesn't add clutter or confusion.

Don't design around patterns. Although it would be nice, we can't trust design patterns as the correct solution. For a majority of patterns, I would speculate that only a small number of problems fit directly in the mold. In the above example, the singleton is not what we ultimately needed. If we hadn't been thinking "singleton, singleton, singleton" early in the design process, we probably wouldn't have ended up with that design. If we had taken a TDD approach to building out the file system writer, we would have likely just ended up with a normal Ruby class, no singletons involved. As software grows and changes, don't get pigeonholed by a design pattern.

Learning from Design Patterns

In the previous section on Applying Design Patterns, I said that all problems have been solved. This is, of course, not true. One of my primary fuel sources is solving problems that I've neither solved myself nor seen solved. That's not to say they haven't been solved, however. My problems are not unique snowflakes. The difference between normal problems and problems which can be readily solved with design patterns is a matter of exposure. We're not exposed to problems we've never seen, and therefore we do not readily have a solution. We must compose our own.

When I encounter new problems, I never think in terms of design patterns. I often think in terms of domain. My utopic engineering process consists of a boundless array of knowledge from which I compose my own solution. I don't rely on one tool, methodology, or process to drive my software. I consume the problem and attempt to make educated decisions. This is the "engineering" part of software engineering. It's not the languages you know, the frameworks you use, or how retina-enabled your computer is. It's your ability to become completely engulfed in a problem, enough to sense its anatomy.

Take, for example, a recent project of mine: Pipes. Pipes evolved organically through deep discussion around the domain. Why does the problem exist, what are the currently known solutions, and how can we derive the best possible outcome? The question of "what design pattern should we use" never arose. That's not to say design patterns were absent from the discussion, however. Some of the architectural motivation was taken from the pipeline processing pattern. Studying the pipeline processing pattern evoked new ideas from which to draw transient conclusions. Ultimately, it was the exploration process combined with studying the pipeline processing pattern that led to a solution I was happy to write home about.

Be a part of the exploration process. Discover how your solution fits into your domain, and your domain into your problem. It's more time consuming than jumping to a cookie-cutter solution, but it's lightyears more glorifying. The exploration process is what leads to interesting and eloquent implementations; ones that can be easily changed, apply to the domain, and have a dash of humanism. No matter how you program, being cognizant of design patterns is always desirable. Learning as much as possible and having varying perspectives is crucial.

Create your own design patterns. Solve problems how you would solve them, not how the Gang of Four solves them. Stay as well-informed as you can on known solutions and reflect on them regularly. Use design patterns as inspiration for better, more applicable solutions to your specific problem. Do not blindly apply them. Think first, then consider design patterns. This is software engineering.

Posted by Mike Pack on 10/02/2012 at 09:10AM

Tags: design patterns, ruby


DCI with Ruby Refinements

TL;DR - Have your cake and eat it too. Ruby refinements, currently in 2.0 trunk, can cleanly convey DCI role injection and perform right on par with #include-based composition. However, there are some serious caveats to using refinements over #extend.

Recently, refinements were added to Ruby trunk. If you aren't yet familiar with refinements, read Yehuda's positive opinion as well as Charles Nutter's negative opinion. The idea is simple:

module RefinedString
  refine String do
    def some_method
      puts "I'm a refined string!"
    end
  end
end

class User
  using RefinedString

  def to_s
    ''.some_method #=> "I'm a refined string!"
  end
end

''.some_method #=> NoMethodError: undefined method `some_method' for "":String

It's just a means of monkeypatching methods into a class, a (still) controversial topic. In the above example, the User class can access the #some_method method on strings, while this method is non-existent outside the lexical scope of User.

Using Refinements in DCI

Refinements can be used as a means of role-injection in DCI, amongst the many other techniques. I personally like this technique because the intention of the code is clear to the reader. However, it has some serious drawbacks which we'll address a bit later.

Let's say we want to add the method #run to all Users in a given context.

Our User class:

class User; end

Our refinement of the User class:

module Runner
  refine User do
    def run
      puts "I'm running!"
    end
  end
end

In the above refinement, we are adding the #run method to the User class. This method won't be available unless we specifically designate its presence.

Our DCI context:

class UserRunsContext
  using Runner

  def self.call
    User.new.run    
  end
end

Here, we're designating that we would like to use the refinement by saying using Runner. The #run method is then available for us to use within the context trigger, #call.
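
Triggering the context is then a one-liner:

UserRunsContext.call # prints "I'm running!"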

Pretty clear what's happening, yeah?

I wouldn't go as far as saying it carries the expressiveness of calling #extend on a user object, but it gets pretty darn close. To reiterate, the technique I'm referring to looks like the following, without using refinements:

user = User.new
user.extend Runner
user.run

Benchmarking Refinements

I'm actually pretty impressed on this front. Refinements perform quite well under test. Let's observe a few means of role injection: inclusion, refinement and extension.

I ran these benchmarks using Ruby 2.0dev (revision 35783) on a MacBook Pro - 2.2 GHz - 8 GB ram.

Check out the source for these benchmarks to see how the data was derived.
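
As a rough sketch of the shape of these benchmarks (the linked source is the authoritative version; the structure below and the 1,000,000-iteration count are assumptions based on the discussion that follows), the #extend variant boils down to something like:

require 'benchmark'

module Runner
  def run; end
end

class User; end

Benchmark.bm(7) do |bm|
  3.times do
    # extend a fresh object and invoke the role method, one million times
    bm.report('DCI') { 1_000_000.times { User.new.extend(Runner).run } }
  end
end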

#include (source)

Example

class User
  include Runner
end

Benchmarks

> ruby include_bm.rb
         user       system     total       real
include  0.560000   0.000000   0.560000 (  0.564124)
include  0.570000   0.000000   0.570000 (  0.565348)
include  0.560000   0.000000   0.560000 (  0.563516)

#refine (source)

Example

class User; end
class Context
  using Runner
  ...
end

Benchmarks

> ruby refinement_bm.rb
        user       system     total       real
refine  0.570000   0.000000   0.570000 (  0.566701)
refine  0.580000   0.000000   0.580000 (  0.582464)
refine  0.570000   0.000000   0.570000 (  0.572335)

#extend (source)

Example

user = User.new
user.extend Runner

Benchmarks

> ruby dci_bm.rb
     user       system     total       real
DCI  2.740000   0.000000   2.740000 (  2.738293)
DCI  2.720000   0.000000   2.720000 (  2.721334)
DCI  2.720000   0.000000   2.720000 (  2.720715)

The take-home message here is simple: #refine performs equally as well as #include and significantly better than #extend. To no surprise, #extend performs worse than both #refine and #include because it injects functionality into objects instead of classes, of which there are 1,000,000 and 1 in these benchmarks, respectively.

Note: You would never use #include in a DCI environment, namely because it's a class-oriented approach.

Separation of Data and Roles

What I enjoy most about the marriage of refinements and DCI is that we still keep the separation between data (User) and roles (Runner). A critical pillar of DCI is the delineation of data and roles, and refinements ensure the sanctity of this concern. The only component in our system that should know about both data and roles is the context. By calling using Runner from within our UserRunsContext, we've joined our data with its given role in that context.

An example of when we break this delineation can be expressed via a more compositional approach, using include:

class User
  include Runner
end

The problem with this approach is the timing in which the data is joined with its role. It gets defined during the class definition, and therefore breaks the runtime-only prescription mandated by DCI. Furthermore, the include-based approach is a class-oriented technique and can easily lead us down a road to fat models. Consider if a User class had all its possible roles defined right there inline:

class User
  include Runner
  include Jogger
  include Walker
  include Crawler
  ...SNIP...
end

It's easy to see how this could grow unwieldy.

Object-Level Interactions and Polymorphism

Another pillar of DCI is the object-level, runtime interactions. Put another way, a DCI system must exhibit object message passing in communication with other objects at runtime. Intrinsically, these objects change roles depending on the particular context invoked. A User might be a Runner in one context (late for work) and a Crawler in another (infant child).

The vision of James Coplien, co-inventor of DCI, is tightly aligned with Alan Kay's notion of object orientation:

“I thought of objects being like biological cells and/or individual computers on a network, only able to communicate with messages.” - Alan Kay


So, as roles are injected into data objects, do refinements satisfy the object-level interactions required by DCI? Debatable.

With refinements, we're scoping our method definitions within the bounds of a class. With modules, we're scoping our methods within the abstract bounds of whatever consumes the module. By defining methods within a module, we're essentially saying, "I don't care who consumes my methods, as long as they conform to a specific interface." Further, in order to adhere to Alan Kay's vision of object orientation, our objects must be dynamically modified at runtime to accommodate for the context at hand. The use of modules and #extend ensures our data objects acquire the necessary role at runtime. Refinements, on the other hand, do not adhere to this mantra.

Along similar lines, let's look at how refinements affect polymorphism. Specifically, we want to guarantee that a role can be played by any data object conforming to the necessary interface. In statically-typed systems and formal implementations of DCI, this is particularly important because you would be defining "methodless roles", or interfaces, which "methodful roles" would then implement. These interfaces act as guards against the types of objects which can be passed around. When we work with refinements and class-specific declarations, we lose the polymorphism associated with the module-based approach. This can be conveyed in the following example:

module Runner
  def run
    puts "I have #{legs} and I'm running!"
  end
end

# The Runner role can be used by anyone who conforms to
# the interface. In this case, anyone who implements the
# #legs method, which is expected to return a number.
User.new.extend Runner
Cat.new.extend Runner
Dog.new.extend Runner

# When we use refinements, we lose polymorphism.
# Notice we have to redefine the run method multiple times for each
# possible data object.
module Runner
  refine User do
    def run
      puts "I have #{legs} and I'm running!"
    end
  end

  refine Cat do
    def run
      puts "I have #{legs} and I'm running!"
    end
  end

  refine Dog do
    def run
      puts "I have #{legs} and I'm running!"
    end
  end
end

The really unfortunate thing about refinements is we have to specify an individual class we wish to refine. We're not able to specify multiple classes to refine. So, we can't do this:

module Runner
  refine User, Cat, Dog do # Not possible.
    def run
      puts "I have #{legs} and I'm running!"
    end
  end
end

But even if we could supply multiple classes to refine, we're still displacing polymorphism. Any time a new data object can play the role of a Runner (it implements #legs), the Runner role needs to be updated to include the newly defined data class. The point of polymorphism is that we don't really care what type of object we're working with, as long as it conforms to the desired API. With refinements, since we're specifically declaring the classes we wish to play the Runner role, we lose all polymorphism. That is to say, if some other type, say Bird, conforms to the interface expected of the Runner role, it can't be polymorphically substituted in place of a User.
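
To put that in code (Bird is a hypothetical newcomer that happens to implement #legs): the module-based Runner accepts it with no changes, while the refinement-based Runner can't until someone reopens the module and adds a refine Bird block.

class Bird
  def legs
    2
  end
end

# Module-based role: Bird plays Runner immediately.
Bird.new.extend(Runner).run # prints "I have 2 and I'm running!"

# Refinement-based role: NoMethodError until `refine Bird` is added to Runner.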

Wrapping Up

Refinements are a unique approach to solving role injection in DCI. Let's look at some pros and cons of using refinements:

Pros

  • #refine provides a clean syntax for declaring data-role interactions.
  • Refinements perform around 500% better than #extend in DCI.
  • The data objects are clean after leaving a context. Since the refinements are lexically scoped to the context class, when the user object leaves the context, its #run method no longer exists.

Cons

  • We lose all polymorphism! Roles cannot be injected into API-conforming data objects at runtime. Data objects must be specifically declared as using a role.
  • We can't pass multiple classes into #refine, causing huge maintenance hurdles and a large degree of duplication.
  • We lose the object-level, cell-like interaction envisioned by Alan Kay in which objects can play multiple and sporadic roles throughout their lifecycle.
  • Testing. We didn't cover this, but in order to test refinements, you would need to apply the cleanroom approach with a bit of test setup (see the sketch after this list). In my opinion, this isn't as nice as testing the result of a method after using #extend.
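
For contrast, a minimal sketch of the cleanroom-style test you'd write for the #extend approach (assuming Minitest and the module-based Runner role that prints "I'm running!"); the refinement version would additionally need a using call in the test file's scope:

require 'minitest/autorun'

class RunnerRoleTest < Minitest::Test
  def test_extended_object_can_run
    cleanroom = Object.new.extend(Runner) # any bare object can play the role
    assert_output("I'm running!\n") { cleanroom.run }
  end
end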

While there are certainly some benefits to using refinements in DCI, I don't think I could see it in practice. There's too much overhead involved. More importantly, I feel it's critical to maintain Alan Kay's (and James Coplien's) vision of OO: long-lived, role-based objects performing variable actions within dynamic contexts.

After all this...maybe I should wait to see if refinements even make it into Ruby 2.0.

Happy refining!

Posted by Mike Pack on 08/22/2012 at 09:13AM

Tags: dci, ruby, refinements