ValyJs Why?

New job, new application, new ideas!

After getting the hang of the application I am taking care of, I started noticing the same code repeating. And after taking a look at some other web pages I saw the same patterns there too.

As a fair warning, we are talking about JavaScript here, so the word code is and will be used in its widest definition.

function foo(){
  var element = $("elementId");
  if(element != null){
    //do something
  }
}

or the classic one, where an anonymous object is passed to a function and each and every member of the object is checked…

function bar(user){
  if(user.name){
    // use the users name
  }
  // dosomething else
  if(user.surname){
    // now it's safe to use the surname
  }
}

And now imagine that a million times over, across all the files that compose the client-side part of the application.

Yes, that makes Dejan quite unhappy 😦

So this is where ValyJs comes into play. I would not call it a complete product but I wrote up some helper functions that can aid those common scenarios.

Now you can write it like this.

function foo(){
  valy.executeIfExists({
    elementId : "elementId",
    func : function(element){
      // do your stuff here
    }
  });
}


Is this better? I think that it is more readable, and if nothing else it is consistent.
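If you are curious what such a helper boils down to, here is a minimal sketch of the idea. This is my illustration, not valy's actual source, and it assumes a plain getElementById lookup:

var valy = window.valy || {};
valy.executeIfExists = function(options){
  // look the element up; a missing element simply means "do nothing"
  var element = document.getElementById(options.elementId);
  if(element != null && typeof options.func === "function"){
    options.func(element);
  }
};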

For the second scenario, we can check for all members up front.

function bar(user){
  valy.exists(user.name);
  valy.exists(user.surname);

  // do your stuff
} 

I do hope that this is better.
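For completeness, and continuing the sketch from above, here is what I imagine exists doing. The real function may behave differently; the idea is simply to fail loudly up front instead of failing somewhere deeper in the function:

valy.exists = function(value){
  // bail out early so the code after the check can use the value safely
  if(value === undefined || value === null){
    throw new Error("Expected value does not exist");
  }
  return value;
};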

There is still a long way to go till I get this to version 1; currently it is at version 0.1.1 and it is working.

What I need now is some creative input to get something that is actually usable.

So anyway you can go and check it out here: http://code.google.com/p/valjjs/



My little VS theme

For lack of a better option I sat down and started surfing the net for something interesting when I stumbled upon Bespin. Not only is it a nice little browser-based code editor, but the default color scheme immediately won me over. I must say that I like it a lot.

So after playing with it for some time I decided to do a little color theme for Visual Studio and see if I could get the same feel there. To achieve this I used the fine services of studiostyles. It is not a perfect tool, there are some problems with using it outside of IE, but it is still good.

Now that it is done I must say that I somehow like the new theme.

If you would like to get the new VS theme you can download it here.

VS theme sample

Bespin inspired VS theme

Building a .net solution using RAKE [part 2]

As begun in the previous post, let's look at the RAKE script.

Preparing the build

The first thing the script has to do is initialize the file structure it is going to work with later on in the script, and there are some global variables to set. And before you look at the code: this is the biggest task in the build script.

task :prepare do |task, args|
 require 'fileutils'
 require 'ftools'
 FileUtils.rm_rf Variables::OUTPUT_PATH if File.exist? Variables::OUTPUT_PATH
 Dir.mkdir(Variables::OUTPUT_PATH)
 Dir.mkdir(Variables::SOURCE_PATH)
 Dir.mkdir(Variables::WIKI_PATH)
 Dir.mkdir(@RELEASE_PATH = "#{Variables::OUTPUT_PATH}/#{Variables::PROJECT_NAME}_#{args["version"]}")
 Dir.mkdir(@DOCUMENTATION = @RELEASE_PATH + "/Documentation")
 Dir.mkdir(@VALY = @RELEASE_PATH + "/Valy")
 File.copy('nunit.xsl', "#{Variables::OUTPUT_PATH}/nunit.xsl")
end

Before we can begin our build and copy bonanza we have to prepare some directories into which we want to copy, just to avoid all those nasty exceptions later on.

The first two lines “import” some useful things that we require for our IO operations. fileutils is an IO utility module from Ruby's standard library and proves extremely useful. So we create a few directories and set the required variables for later use. The last thing we do is copy the nunit.xsl to the output directory.

If you are wondering what those Variables are then let me disappoint you. They are just constants defined in another file so they do not clog up the main rake script.

module Variables
  ## Project paths
  OUTPUT_PATH = "../Release"
  SOURCE_PATH = "#{OUTPUT_PATH}/Code"
  LIB_PATH = "../Sources/Lib"
  WIKI_PATH = "#{OUTPUT_PATH}/Wiki"

  ## Third party tools
  NUNIT_EXE = "#{LIB_PATH}/NUnit/nunit-console.exe"
  DOT_NET_PATH = "#{ENV["SystemRoot"]}\\Microsoft.NET\\Framework\\v3.5"

  ## Build properties
  CONFIG = "Debug"
  PROJECT_NAME = "valy"
end

Getting the sources from the repository

As mentioned before, valy is hosted on Google Code in a Mercurial repository. To make my life (and the script) easier I have TortoiseHg installed locally.

So let’s take a look at the script that gets my sources from the repository:

desc "Gets the sources from the valy repository"
task :get_sources => [:prepare] do
   Helpers.clone_repository("https://valy.googlecode.com/hg", Variables::SOURCE_PATH)
   Helpers.clone_repository("https://wiki.valy.googlecode.com/hg/", Variables::WIKI_PATH)
end

To make it easier, the Helpers module defines a method clone_repository that looks quite simple.

# Clones the given mercurial repository
def Helpers.clone_repository(repository_path, output_path)
  puts "Cloning repository #{repository_path}"
  unless system "hg clone #{repository_path} #{output_path}"
    puts "Clone FAILED"
    puts $?
  end
end

I must say that I like this piece of code. It’s simple and effective. I take advantage of the fact that I can do a system call to hg and it will get the repository for me. I am using the system command for this because it returns a boolean and, in case of failure, the exit status of the command ends up in the $? global variable.

And that’s it. Now you have the sources on your local system.

Building the solution

This is actually represented in two tasks:

  1. Set the version number in the assemblyinfo.cs
  2. Build the solution using MsBuild

The two tasks again look “quite” easy.

task :set_assembly_attributes => [:get_sources] do |task, args|
  Helpers.set_assembly_version "#{Variables::SOURCE_PATH}/Sources/valy/CommonAssemblyInfo.cs", args["version"] if args["version"]
end

desc "Compile solutions using msBuild"
task :compile => [:set_assembly_attributes] do
  solutions = FileList["#{Variables::SOURCE_PATH}/**/*.sln"]
  solutions.each do |solution|
   Helpers.build_solution(solution)
  end
end

Both tasks depend on the Helpers module that we will look at later. For now let's concentrate on the task at hand 🙂

The :set_assembly_attributes task is the only rake task (until now) that uses parameters, represented by args in the task block. So if the version is not given, the version will remain at the default 0.0.0.0. To illustrate, let's look at some calls to the rake script:

c:\valy\build>rake version=1.0.0.1
=> dll version = 1.0.0.1

c:\valy\build>rake
=> dll version = 0.0.0.0

Before we look into how I set the assembly version, let me state that I have seen the libraries and custom tasks out there and found them too complicated for my purpose. So just for the fun of it I did it my way.

def Helpers.set_assembly_version assembly_info_file, version
  unless File.exist? assembly_info_file
    raise "File #{File.expand_path(assembly_info_file)}, not found"
  end
  # read the file, replace the default version, then write it back
  buffer = File.open(assembly_info_file, 'r'){|f| f.read.gsub(/0\.0\.0\.0/, version)}
  File.open(assembly_info_file, 'w'){|f| f.write buffer}
end

As is visible from the code above, I use a simple regex replacement to change all occurrences of 0.0.0.0 to whatever version I provided to the script. It is neither perfect nor pretty. What I hate the most about it is the fact that I need the buffer; I would rather read and write in one single stroke. But this will do for now.

After seeing the clone_repository method the build_solution is no big surprise.

def Helpers.build_solution solution_path
  puts "Building solution : #{solution_path}"
  if system "#{Variables::DOT_NET_PATH}/msbuild.exe /noconlog /nologo /p:Configuration=#{Variables::CONFIG} #{solution_path} /t:Rebuild"
    puts "Build SUCCEEDED"
  else
    puts "Build FAILED"
    puts $?
  end
end

Looks familiar? Well it is. This is the first indication that I will have to refactor this in the future to make it “prettier”. But for now it works.

Running unit tests

This final task is not yet complete. After the tests are run, the XML output is not yet transformed into HTML using the XSL file copied at the beginning 😦 (a sketch of a possible fix follows further down). But the rest of the task looks as follows.

desc "Testing the build assemblies using NUnit"
task :test => [:compile, :prepare] do
  tests = FileList["#{Variables::SOURCE_PATH}/**/*tests.dll"].exclude(/obj\//)
  Helpers.run_tests tests, Variables::OUTPUT_PATH + "/TestReport.xml" unless tests.empty?
end

As mentioned before this is still work in progress…

Basically what happens is that I search for all DLLs whose names end in tests and exclude the ones found in the obj folders.

Enter the helper that actually runs the tests.

def Helpers.run_tests tests, report_file
  puts "Running tests"
  system "#{Variables::NUNIT_EXE} #{tests} /nologo /xml=#{report_file}"
end

It basically just runs the NUnit console command with some parameters. So nothing special here. Yet.
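As for the missing XML to HTML step, a helper in the same spirit could simply shell out to a command line XSLT processor. This is only a sketch of what I have in mind; msxsl.exe is an assumption here, any XSLT processor would do:

# Sketch only: transform the NUnit XML report into HTML with an external
# XSLT processor (assumes msxsl.exe can be found on the path)
def Helpers.transform_report(report_file, xsl_file, output_file)
  puts "Transforming #{report_file}"
  unless system "msxsl #{report_file} #{xsl_file} -o #{output_file}"
    puts "Transformation FAILED"
    puts $?
  end
end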

And that is all I prepared for this post. Next time we will look at how I:

  • Get the test coverage
  • Harvest the build files
  • Package them into a zip file

Thanks for reading.

Building a .net solution using RAKE

When working on a project you accept that you have to do some chores that you would rather not. When working on something “just4fun” this same assumption is just not true. If I am spending my free time behind the computer then I want to do things that I want to do. Among the things that I hate to do are reports and deployment packages.

The sad truth is that you need both. So what can you do about it? From my standpoint you have two options:

  1. You just do it by hand and bear the pain
  2. Automate it!

As “The Pragmatic Programmer” and “The Productive Programmer” teach us, automation is the way to go! So we have to look at build automation. In the past I would just open a text editor and start typing an ANT build script. But for valy I wanted to do something new. The reasons why I wanted to get away from ANT are the following:

  • XML is good for structuring data but really bad when trying to capture a process
  • It is quite hard to get a reasonable structure into the XML file
  • It takes quite some experience to read an ANT file reasonably

I hope you get the general idea that XML is not a good way of describing a build process. ANT is a great tool but the XML format is killing it for me (I hope they provide a DSL in the future).

So after long consideration I opted for RAKE. Rake is a Ruby-based DSL for build automation. If you have Ruby installed, then getting RAKE is as simple as installing an additional gem. But setting up RAKE is not a part of this post.

After opting for RAKE I had to determine what my build script will do and if I will have more than one to get the job done. Some hours later I came up with the following list of tasks it has to do:

  1. Get sources from the mercurial repository
  2. Build sources
  3. Run tests
  4. Run nCover
  5. Assemble documentation
  6. Create a deployment package
  7. Compress

Not exactly mission impossible but still a challenge.

Before we continue, let’s get something clear. The scripts posted here are still work in progress and may change. If you want to follow my progress, all scripts are published at http://code.google.com/p/valy/source/browse/#hg/Build. And of course you can copy and use them as you like, just let me know if you make some improvements.

I will not spend any words on how I structured the code because I have lost too many words already 🙂

Because this post is getting quite long I have decided to break it into multiple posts.

So until next time.

Dynamic object activation, the cool way

After working with ASP.NET MVC for some time I learned to love and appreciate the little things in the framework that are done really well. One of those, which sticks out in my view, is the way that HTML attributes are passed to view items.

Html.TextBox("UserName", Model.UserName, new {@readonly = "readonly"})

I personally think that the new {@readonly = "readonly"} syntax has something interesting to it. The potential usages of this are quite mind-blowing. I intend to create a generic validator activator in the near future; for now I just implemented a simple test instance initialization.

In valy a validator is initialized by providing a validation initialization context. One of the key components of this is a dictionary which is then used to set up the validator state (more about this in a later post). What I wanted was a streamlined syntax to initialize a validator for unit testing. The original implementation of this was something like the following:

var validator = new StringInRange(new Configuration.Confiuration{Message = "Error message", ThrowException = true, Parameters = new Dictionary<string, object>{ { "MinValue", 1 } }});

Not really that “sexy”, and I wanted to make it sexy. So I turned to our old friend Reflector and took a closer look at how it is done in the ASP.NET MVC framework. What came out is the following.

[Test]
public void Validate_PassInRangeString_ValidationPasses()
{
  var validator = TestHelper.Create<string, InRange>(ERROR_MESSAGE, false, new { @MinLenght = 3, @MaxLenght = 7 });

  var results = validator.Validate(IN_RANGE_STRING);

  results.Passed();
}

This gives us a nice little unit test that is neatly separated into setup, action, and assert sections. Or in more modern terms: given, when, then.

And I think that this is quite simple and straightforward. There are some problems with this implementation, but still I think that it is a great start.

Now to the interesting side of things. Here is the implementation of the Create method.

public static IValidator<T> Create<T, TValidator>(string message, bool throwException, object parameters)
{
  var parameterDictionary = new Dictionary<string, object>(StringComparer.OrdinalIgnoreCase);

  if (parameters != null)
  {
    foreach (PropertyDescriptor descriptor in TypeDescriptor.GetProperties(parameters))
    {
      object value = descriptor.GetValue(parameters);
      parameterDictionary.Add(descriptor.Name, value);
    }
  }

  var configuration = new Configuration.Confiuration
  {Message = message, ThrowException = throwException, Parameters = parameterDictionary};

  return (IValidator<T>)Activator.CreateInstance(typeof(TValidator), configuration);
}

If you are wondering what happens behind the scenes, let me explain.

Behind the scenes a dynamic object is created with the properties MaxLenght and MinLenght. When the Create method gets this dynamic object, all it does is iterate through all the public properties and create a Dictionary where the property names are the keys and the property values become the values of the dictionary. This dictionary is then passed to the validator constructor and all is well again.

I intend to make this Object -> Dictionary step unnecessary by providing a constructor overload that will handle this dynamic object. But this is still in the future; a sketch of what I have in mind follows below.
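Just to sketch the idea (hypothetical code, nothing like it exists in valy yet), such an overload on the base validator would take the anonymous object directly and reuse the same TypeDescriptor trick internally:

// Hypothetical sketch of the planned overload, not actual valy code.
public BaseValidator(string message, bool throwException, object parameters)
{
  var parameterDictionary = new Dictionary<string, object>(StringComparer.OrdinalIgnoreCase);
  if (parameters != null)
  {
    foreach (PropertyDescriptor descriptor in TypeDescriptor.GetProperties(parameters))
      parameterDictionary.Add(descriptor.Name, descriptor.GetValue(parameters));
  }
  // from here on the existing configuration handling would take over
}

With that in place the Create helper would shrink to little more than the Activator call.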

What has to be mentioned is that there is a problem with this. At least I have a problem. And that is when I want to set a property of the type System.Type. The problem is that the property does not get set to type System.Type but to System.RuntimeType. But this is a minor setback that should be solved in the near future.
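For illustration, and assuming the trouble comes from an exact type comparison somewhere in the setup code, checking for assignability instead sidesteps the issue (System.RuntimeType derives from System.Type):

// The value that comes out of the descriptor is a System.RuntimeType
// instance, which derives from System.Type, so compare accordingly.
object value = typeof(string);
Console.WriteLine(value.GetType() == typeof(Type));                // False
Console.WriteLine(value is Type);                                  // True
Console.WriteLine(typeof(Type).IsAssignableFrom(value.GetType())); // True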

All in all this is a great way to make unit tests simpler.

Validating a validator

If you take your precious free time and build a validation framework from the ground up, you want the architecture to be perfect, or as close to perfection as possible. So you are bound to run into some problems on the way there. The problem that I was struggling with a while ago was the question how to validate the validator?

The problem became painfully obvious when I wrote the first System.String validator that required some external configuration. To illustrate my little dilemma here is the validator code.

// Assuming an alias for the framework class, so the reg.IsMatch call
// below resolves despite this class being named Regex itself:
using reg = System.Text.RegularExpressions.Regex;

namespace valy.Validators.String
{
 public class Regex : BaseValidator<string>
 {
   public Regex(IEnumerable<IValidator<string>> validationParts) : base(validationParts){}
   public Regex(IValidatorConfiguration configuration) : base(configuration){}

   protected override IValidationResult DoValidate(string objectToValidate)
   {
     if (reg.IsMatch(objectToValidate, RegularExpression))
       return Pass();
     return Fail(ValidationFailReasons.Invalid);
   }

   protected override void CheckParameters()
   {
     Require(this, v => v.RegularExpression, s => !string.IsNullOrEmpty(s));
   }

   public string RegularExpression { get; set; }
 }
}

For now try to ignore the existence of the CheckParameters method.

In this special case I have to validate that a valid non-empty regular expression is given to the validator. You could do it in the validation body with a simple if statement, but then it would be the responsibility of every validator to do its internal validation and communicate possible violations to the external world. This would lead to an inconsistent framework in no time at all. What I wanted was a way by which the validator could define a set of rules that had to be met before the actual validation could happen.

This does not give us complete protection. In the above example the object to validate can still be null. But that is not a problem of the validator anymore.

But let's return to the method that you should have ignored until now. The CheckParameters method's purpose in life is to define a set of rules that ensure that the internal state of the validator is consistent. This means that if the method “passes”, the validator is ready to use and will not throw any bogus exceptions when invoked.

The core of this internal validation scheme is the Require method implemented in the base validator. Using this simple building block you can put together relatively complex validation schemes, like shown in the next code sample.

protected override void CheckParameters()
{
 Require(this, v => v.MaxLenght, i => i > MinLenght);
 Require(this, v => v.MinLenght, i => i < MaxLenght);
 Require(this, v => v.MaxLenght, i => i > 0);
}

The complexity of the example above will not win you any Nobel prizes, but it will ensure that your range validator cannot be given an invalid range. And this works for me (until now)!

This is not perfect and the thing that is annoying me the most is the fact that the current object has to be passed as a parameter to the function. This is necessary because the Require method is defined on the parent class and therefore has to somehow link to the actual overriding class; one idea for getting rid of this is sketched at the end of this post. But this is just a minor hiccup. There are other potential problems that are not causing me any headaches yet, so they will get fixed when they start to hurt.

All in all this scheme works and I am quite happy with it. The last part I would like to show you is the implementation of the Require method.

protected void Require<TValidator, TParam>(TValidator validator, Expression<Func<TValidator, TParam>> property, Expression<Predicate<TParam>> predicate)
{
 var propertyValue = property.Compile().Invoke(validator);
 if (!predicate.Compile().Invoke(propertyValue))
   throw new ValidatorInitializationException(
              string.Format("Validation initialization failed on predicate : {0} for member : {1}",
             predicate.Body, property.Body),
             GetType());
}

As you can see the implementation is nothing special. Just two expressions that get evaluated at runtime, and if the predicate fails a ValidatorInitializationException is thrown. Really there is not much more to say about this topic.
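One more thing, the sketch I promised earlier: a variant of Require that captures the property in a closure, so the validator instance no longer has to be passed in. This is just an idea, not what valy currently does:

// Sketch: the property access is captured in a closure,
// so the validator parameter disappears.
protected void Require<TParam>(Expression<Func<TParam>> property, Expression<Predicate<TParam>> predicate)
{
  var propertyValue = property.Compile().Invoke();
  if (!predicate.Compile().Invoke(propertyValue))
    throw new ValidatorInitializationException(
              string.Format("Validation initialization failed on predicate : {0} for member : {1}",
              predicate.Body, property.Body),
              GetType());
}

// CheckParameters would then read:
// Require(() => RegularExpression, s => !string.IsNullOrEmpty(s));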

Hope that this helped someone out there, and if you see something wrong with this, write a comment and tell me about it.

Constraint-Based Asserts and NUnit

In this post I will concentrate on NUnit and its “new” feature. Well, not exactly new, but a feature that has been around since version 2.4 (the current version is 2.5.2). The feature that I would like to present here is the constraint-based assertion.

Before we start with the new and shiny let’s look at the old and proven.

[Test]
public void FooTest()
{
  var result = DoSomething();

  Assert.AreEqual("something", result.Text);
}

This is what we know and this is what we work with. The problem is that people manage to confuse the expected and actual parameters of the assert statement, which then results in funny little mishaps. But that is not the theme of this post.

What I would like to show here is the constraint based assertion. To get us into the subject here is a little example.

[Test]
public void FooTest()
{
  var result = DoSomething();

  Assert.That(result, Is.EqualTo("something"));
}

Looks nice and it is. What I find good is that it gets harder to confuse the expected and actual values.

The best thing is that you can chain the constraints. This would then look like the following.

[Test]
public void FooTest()
{
  var result = DoSomething();

  Assert.That(result, Is.Not.EqualTo("gnihtemos"));
}

The next thing is not really in context with the post's title, but I think it is such a nice feature that it is worth mentioning. The “thing” in question is the exception assertion. To set the record straight, I find nothing wrong with the ExpectedException attribute, but this new version is just cleaner.

So as before, let's look at the old and proven before we take a look at the new and shiny. The old and proven would look like this.

[Test]
[ExpectedException(typeof(SomeException))]
public void ExceptionTest()
{
  ThrowsSomething();
}

This is nice and nicely extendable. What they came up with is a non-attribute version that works really well.

[Test]
public void ExceptionTest()
{
  Assert.Throws<SomeException>(() => ThrowsSomething());
}

This is nice and becomes quite readable without being too invasive. And I personally quickly fell in love with this way of testing for exceptions.
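A detail worth knowing: Assert.Throws also hands you back the caught exception, so you can keep asserting on its details. A small example, reusing the placeholders from above:

[Test]
public void ExceptionDetailsTest()
{
  var exception = Assert.Throws<SomeException>(() => ThrowsSomething());

  Assert.That(exception.Message, Is.Not.Empty);
}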

With this much goodness there has to be something bad. And there is. What you will find hard is integrating your custom constraints into NUnit's fluent syntax. You actually have two ways to use your custom constraints without poking into the framework itself.

Option one is to use a built-in construct and get code that would look something like this.

[Test]
public void MyCustomConstraintTest()
{
  MyObject result = DoCustomStuff();

  MyConstraint constraint = new MyConstraint();
  Assert.That(result, Has.Some.Matches(constraint));
}

And option two, which requires less code but does not look as good.

[Test]
public void MyCustomConstraintTest()
{
  MyObject result = DoCustomStuff();

  Assert.That(result, new MyConstraint());
}

There is always the option to hack at the framework and integrate your custom constraint. But I would not recommend it.

As a last thing, let's take a look at how you can write your own constraint. It is actually quite simple: all you have to do is derive from the Constraint class. The base class itself is abstract and features many methods that you can override, but in most cases you will just need the two abstract ones.

Your custom constraint could look something like this:

public class MyConstraint : Constraint
{
  public override bool Matches(object actual)
  {
    this.actual = actual; // keep the actual value for the failure message
    if(actual is MyObject)
      return ((MyObject)actual).Quantity > 0;
    return false;
  }

  public override void WriteDescriptionTo(MessageWriter writer)
  {
    writer.Write("MyConstraint failed");
  }
}

So this is the gist of it. I hope that you got a little insight into the new features of NUnit.

All that is now left to do is to use it 🙂