Bulk Delete

Maybe it looks like I have a personal vendetta against OR mappers, but that is not the case. In the last week I have seen an otherwise “good” application’s performance go overboard because of a misuse of an OR mapper. So I just want to share my findings and maybe save someone, somewhere, some time.

My poison today was the process of deleting a lot of records from a simple table. A simple task if you care to get your fingers dirty in SQL, regardless of the flavor.

Let’s say that you have a table of users and you woke up one morning with a burning hate for the letter ‘H’ (or any other letter in the alphabet). You cannot help but delete all users whose name starts with the letter ‘H’.

All you need to do is to execute this simple query:

DELETE FROM Users
WHERE UserName LIKE 'H%'

And done…

I agree that this scenario is a little… simple, but even if the database schema were a little more complicated you could still do it with very few queries.
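
For example, suppose there were also an Orders table referencing Users (a hypothetical table, purely for illustration). Two statements would still get the job done:

DELETE FROM Orders
WHERE UserId IN (SELECT UserId FROM Users WHERE UserName LIKE 'H%')

DELETE FROM Users
WHERE UserName LIKE 'H%'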

Now let’s take a look at how this happens with an OR mapper.

With an OR mapper we first have to get a hold of the users we want to delete. The easiest (and most common) way to do this is to load them into memory. That means that a SQL query will be fired to read all the users that we are going to delete.

SELECT * FROM Users WHERE UserName LIKE 'H%'

Note: Please take into consideration that this query does not happen if the users are already loaded into memory.

So now you have them in memory and you can call Delete on every one of them. The default behavior of any OR mapper is to fire off individual delete statements, one for each user you want to delete.

DELETE FROM Users WHERE UserId = 1
DELETE FROM Users WHERE UserId = 2
...
DELETE FROM Users WHERE UserId = 5000

There is no arguing that this is less efficient than doing all of it in one big delete statement. The situation gets even worse if there are triggers or a cascade delete 😦
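
For concreteness, here is what that read-then-delete pattern looks like in code. This is only a hedged sketch using Entity Framework’s DbContext API; MyDbContext, its Users set, and UserPurger are placeholder names, and other mappers look much the same:

using System.Linq;

public class UserPurger
{
    public void DeleteUsersStartingWithH(MyDbContext context)
    {
        // Fires the SELECT and materializes every doomed user.
        var users = context.Users
                           .Where(u => u.UserName.StartsWith("H"))
                           .ToList();

        // Marks each entity for deletion individually.
        foreach (var user in users)
            context.Users.Remove(user);

        // Fires one DELETE statement per user.
        context.SaveChanges();
    }
}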

But before you start fainting, stop and think. I found a solution, sadly only for Entity Framework. For the rest I just bypass the OR mapper and use the underlying connection to fire SQL statements directly (how this is done is specific to the OR mapper used).

This breaks the “database agnostic” advantage that OR mappers give you. But to be honest I think that this advantage is a myth, so I lose nothing here. And besides, in all the SQL flavors the vanilla delete statement is the same.
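
When the mapper offers no bulk operations, bypassing it can be as simple as dropping down to plain ADO.NET. A minimal sketch, assuming SQL Server and that you can get at a connection string (BulkDelete is a made-up helper name):

using System.Data.SqlClient;

public static class BulkDelete
{
    public static int DeleteUsersStartingWith(string connectionString, string prefix)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = connection.CreateCommand())
        {
            // One set-based DELETE instead of thousands of single-row ones.
            command.CommandText = "DELETE FROM Users WHERE UserName LIKE @pattern";
            command.Parameters.AddWithValue("@pattern", prefix + "%");
            connection.Open();
            return command.ExecuteNonQuery();
        }
    }
}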

So this is another problem I have with OR mappers and my solution to it. Please keep in mind that this only becomes a problem with massive data quantities. What “massive” means for your application is a question that can only be answered individually.

But before I go I want to share my latest story. There was an operation in an application I work on that fired 80k to 100k requests, of which 80% to 90% were delete statements. Before I started, the operation took 50 minutes and would often fail because of timeouts. After the “optimization” the same operation, working with the same data, finished in 4 to 6 seconds.

User.Deleted

Interesting title, is it not?

But let me explain. I have noticed lately that people have stopped using their brains and have handed their data storage over to heuristics. What do I mean by that? I mean that more and more people are using their OR mapper to generate their DB schema. And I think that this is not the greatest idea in the world. Maybe this is just me, but I hope not.

Case in point is the title.

In the database’s “Users” table there may be a column labeled Deleted. What this column does is tell you that the user is actually deleted. You might ask yourself: if the user is deleted, why not just delete the user record? Well, there are many reasons, but the top one is data integrity. There are ways to work around this, but if not pressed by the law we tend not to do it.

For those still in the dark here is a quick recap of why this is done. When the user entry is created we tend to bind other data in the database to that user. When the user then decides to leave our system we tend to keep some of the data he has provided, or data that accumulated as a result of him using the system. To keep the database integrity intact we either have to reassign all that data to another “bogus” user or just mark the user as nonexistent. We obviously go for the latter 🙂

So Users.Deleted makes sense in the context of a database, but how about in the context of your domain model? There it makes little sense, because if the user is deleted (nonexistent as far as the application is concerned) the user should not be loaded into the domain model in the first place.
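
One way to achieve this is to filter the flag away at the data-access boundary, so the Deleted column never leaks into the domain model. A minimal sketch with illustrative names (UserRecord, User and UserRepository are all made up for this example):

using System.Collections.Generic;
using System.Linq;

// The row as stored in the database; it carries the Deleted flag.
public class UserRecord
{
    public int UserId { get; set; }
    public string UserName { get; set; }
    public bool Deleted { get; set; }
}

// The domain model; no Deleted flag, because a deleted user never gets here.
public class User
{
    public int UserId { get; set; }
    public string UserName { get; set; }
}

public class UserRepository
{
    private readonly IQueryable<UserRecord> records;

    public UserRepository(IQueryable<UserRecord> records)
    {
        this.records = records;
    }

    // Soft-deleted rows are filtered out at the boundary, so the domain
    // only ever sees users that still exist.
    public IEnumerable<User> GetUsers()
    {
        return records
            .Where(r => !r.Deleted)
            .Select(r => new User { UserId = r.UserId, UserName = r.UserName });
    }
}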

This is just one small example where the data domain and the database schema are not equal. There are countless others and their number will increase with the complexity of the application.

Day 1 – Pex and Moles

Today was the first day of my .NET 4.0 training, and I would like to share the highlights of each day. Well, at least the highlights for me. So this is the first day of 5.

Introductions and first days are always slow, and this was no exception. There were some basic introduction things for people who have never come into contact with electricity. But a good 5 hours in, something was presented that caught my eye: Pex and Moles.

Pex and Moles are actually two separate pieces of software:

  • Pex automatically generates test suites with high code coverage.
  • Moles allows to replace any .NET method with a delegate.

I am still unsure about the real day-to-day value that Moles will offer me. This is not because the software is not up to par with what I would use, but because I mostly work on code I can change and refactor, so the need for such a tool rarely arises. But more on that later on.

First you will need a copy of Pex and Moles. So here is the download link. You can even find a version for the express editions (non-commercial). After you have the file just let the installer do its work and tolerate the 2 ~ 3 times your focus will be stolen (it is worth it).

Next you need some code to let Pex have fun with. I just quickly wrote a little class with one method. Here it is:

using System;

namespace PexAndMoles
{
    public class Calculator
    {
        public int Add(int one, int two)
        {
            if(one == 0 || two == 0)
                throw new ArgumentException();

            if(one < two)
                throw new ArgumentException();

            return one + two;
        }
    }
}

There is a reason for all those ifs in there: their sole purpose is to give Pex something to work on 🙂

So to get started just right-click on the method you want to “work on”. You should see something like this.

Run PEX

Pex will ask you which testing framework it should use. You can choose from all the major testing frameworks. But to keep it simple I chose to stick with MSTest.

Select testing framework

After a short wait, during which you are tempted by a “follow us on Facebook” link, the results are presented, and if you are lucky (depending on the code complexity) Pex will find all the major test scenarios for your method.

In my case this is what it came up with.

Pex results

And those are all the test scenarios I wanted (or even expected).

Now that you have your tests you want to keep them for later (most probably for some sort of regression testing). Pex can help you there too. If you select all the created “results”, a “Promote…” button will appear. Pressing it adds the “results” as unit tests to your testing project, or creates a new one and adds them there.

The code generated is confusing at worst and funny at best. It is not the go-to example of good/clean code. But it is auto-generated and can be regenerated if future changes break the tests. The naming convention is “acceptable”. Before I rant too much, here is the generated code:

using System;
using Microsoft.Pex.Framework;
using Microsoft.Pex.Framework.Validation;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace PexAndMoles
{
    [TestClass]
    [PexClass(typeof(Calculator))]
    [PexAllowedExceptionFromTypeUnderTest(typeof(ArgumentException), AcceptExceptionSubtypes = true)]
    [PexAllowedExceptionFromTypeUnderTest(typeof(InvalidOperationException))]
    public partial class CalculatorTest
    {
        [PexMethod]
        public int Add(
            [PexAssumeUnderTest]Calculator target,
            int one,
            int two
        )
        {
            int result = target.Add(one, two);
            return result;
            // TODO: add assertions to method CalculatorTest.Add(Calculator, Int32, Int32)
        }
        [TestMethod]
        [ExpectedException(typeof(ArgumentException))]
        public void AddThrowsArgumentException547()
        {
            int i;
            Calculator s0 = new Calculator();
            i = this.Add(s0, 0, 0);
        }
        [TestMethod]
        [ExpectedException(typeof(ArgumentException))]
        public void AddThrowsArgumentException81()
        {
            int i;
            Calculator s0 = new Calculator();
            i = this.Add(s0, 1, 0);
        }
        [TestMethod]
        public void Add520()
        {
            int i;
            Calculator s0 = new Calculator();
            i = this.Add(s0, 1, 1);
            Assert.AreEqual<int>(2, i);
            Assert.IsNotNull((object)s0);
        }
        [TestMethod]
        [ExpectedException(typeof(ArgumentException))]
        public void AddThrowsArgumentException470()
        {
            int i;
            Calculator s0 = new Calculator();
            i = this.Add(s0, 2, 3);
        }
    }
}

And that is Pex in a nutshell. At least, that is what I was able to find out about it in the half day I spent with it.

I know that Moles was not covered here, but I would like to spend some more time with it before writing about it in more detail. Besides, the post is long enough.

And that is all the time I have today.

Thx for your time.

Log4Net custom LayoutPattern

A few days ago I had a nice little task at my workplace: to add a custom “key” to the log4net pattern layout.

I like log4net and was quite happy to be given time to dive into the belly of the beast. But I quickly noticed that most of the information on the web is outdated and simply wrong for the current version.

The most useful information I found was a blog post from Scott Hanselman: http://www.hanselman.com/blog/CreatingYourOwnCustomPatternLayoutPatternParserAndPatternConvertorWithLog4net.aspx. Sadly log4net has moved along since that post was written.

Where I failed with the current version was the fact that the PatternParser class is sealed, thus making any inheritance attempt void.

After some Googling and reading the log4net SDK documentation I found the solution to be simple and elegant at the same time. All you have to do is create two classes:

  • PatternConverter
  • PatternLayout

And you are done.

The first class you have to implement is a PatternConverter, which will actually handle the new key you want to add. The implementation is straightforward and easy to do. An example would look like this:

using System.IO;
using log4net.Util;

namespace MyCustomLog4NetPattern
{
  public class CustomPatternConverter : PatternConverter
  {
    protected override void Convert(TextWriter writer, object state)
    {
      // Whatever is written here replaces the custom key in the layout.
      writer.Write("Message to add");
    }
  }
}

The object that gets passed in is actually the LoggingEvent that triggered the log operation, so if you need additional data from there it is available. In addition you can access all the environment variables that you think should be placed in the log.
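
For example, a hedged sketch of pulling details off the event could look like this (ThreadAwarePatternConverter is a made-up name; it assumes the converter is invoked from a PatternLayout, where state is the current LoggingEvent):

using System.IO;
using log4net.Core;
using log4net.Util;

namespace MyCustomLog4NetPattern
{
  public class ThreadAwarePatternConverter : PatternConverter
  {
    protected override void Convert(TextWriter writer, object state)
    {
      // When driven by a PatternLayout, state is the LoggingEvent.
      var loggingEvent = state as LoggingEvent;
      writer.Write(loggingEvent != null ? loggingEvent.ThreadName : "unknown");
    }
  }
}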

A little word of advice: do not apply any formatting to the string you write. So no newlines or any other fancy formatting.

After that is implemented we can move on to the PatternLayout. This is again an almost empty inheritance story, which will look like this:

using log4net.Layout;

namespace MyCustomLog4NetPattern
{
  public class CustomPatternLayout : PatternLayout
  {
    public CustomPatternLayout()
    {
      // Registers the converter under the key used in the conversion pattern.
      AddConverter(new ConverterInfo { Name = "your_key", Type = typeof(CustomPatternConverter) });
    }
  }
}

And that is it! You are done. Every time the string %your_key appears in the appender layout, your little converter will be called and the placeholder replaced.

If now you want to use your custom “key” on a rolling file appender the configuration would look like this:

<appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
  <file value="Log/log.txt" />
  <appendToFile value="true" />
  <rollingStyle value="Composite" />
  <datePattern value="yyyyMMdd" />
  <maxSizeRollBackups value="10" />
  <maximumFileSize value="1MB" />
  <layout type="MyCustomLog4NetPattern.CustomPatternLayout, MyCustomLog4NetPattern">
    <conversionPattern value="%date [%thread] %-5level %logger - %message%newline - My stuff: %your_key%newline" />
  </layout>
</appender>

And that is all there is to it. An easy and simple solution.

Hope that this will save somebody some time in the future 🙂

Generic section handler – HowTo

After being on vacation for the last two weeks, and therefore suffering a little from internet abstinence syndrome, I am back with a vengeance. No, not really, but I do feel refreshed and ready to take on anything.

So to get me a little into the “flow”, here is a post that I was planning to publish before vacation time struck.

It is nothing special just a little testing helper I wrote to get some much needed tests done.

If you are developing a .NET component that is going to be part of a bigger application (or solution), chances are high that you will provide a so-called “custom configuration section”. There are other ways to configure a component, but a custom configuration section is quite elegant.

So you start to develop your custom configuration section. I hope that I do not have to cover the nuts and bolts of how to get this done. What I want to convey here is the answer to the question of how to test it.

After you implement your object structure you are faced with quite a nice little problem: how do you verify that your configuration is “bullet proof”? There are two answers to this question:

  1. You write the XML configuration for all the possible permutations and combinations of how your configuration section could be used.
  2. You test each configuration “element” on its own and verify that each functions as desired.

No. 2 is definitely the better option and therefore should be used. And there is another argument for option no. 2: if the configuration section is relatively complex, the number of combinations goes through the roof. If you have ever put the effort into testing a custom configuration section then you know that this is not easily achieved.

But nevertheless, let’s get to explaining how this is done. My testing framework of choice is NUnit, so all code samples will use it.

I take it that you have your configuration elements already defined. So now all you need to do is write a simple generic configuration section handler.

using System.Configuration;
using NUnit.Framework;

namespace Tests.Helpers
{
  public class GenericSectionHandler<T> : ConfigurationSection where T : ConfigurationElement
  {
    private const string ELEMENT_NAME = "TestElement";

    [ConfigurationProperty(ELEMENT_NAME, IsRequired = true)]
    public T GetConfiguration
    {
      get { return (T) this[ELEMENT_NAME]; }
    }

    public static T Get(string sectionName)
    {
      var sectionHandler = ConfigurationManager.GetSection(sectionName) as GenericSectionHandler<T>;
      if (sectionHandler == null)
        Assert.Fail("section [{0}] does not exist", sectionName);

      return sectionHandler.GetConfiguration;
    }
  }
}

After you have that in place, you have to add the configuration group to your test project’s configuration file. This could look like the following:

<sectionGroup name="SectionTests">
  <section name="MyTest" type="Tests.Helpers.GenericSectionHandler`1[[MyApplication.Configuration.MySection, MyApplication]], Test"/>
</sectionGroup>
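
Keep in mind that section groups have to be declared inside the <configSections> element at the top of the configuration file, roughly like this:

<configuration>
  <configSections>
    <sectionGroup name="SectionTests">
      <!-- the <section> declaration from above -->
    </sectionGroup>
  </configSections>
  <!-- the <SectionTests> element shown below goes here -->
</configuration>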

Now you can just add your configuration scenario.

<SectionTests>
  <MyTest>
    <TestElement>
      <!-- YOUR CONFIGURATION ELEMENT -->
    </TestElement>
  </MyTest>
</SectionTests>

Now you can use your configuration section scenario in your actual unit tests.

[Test]
public void MySectionTests()
{
  var mySection = GenericSectionHandler<MySection>.Get("SectionTests/MyTest");
  // do your assertions
}

And that is that.

Not the best post I have ever written, I know, but since I have not been blogging for quite some time I hope that this will be a nice start to getting back into the habit 🙂

ReSharper 5.1 first look

About a year ago I started using JetBrains ReSharper, or R# for short, and have been hooked ever since.

So it came naturally that I followed the upgrade wave and installed R# 5.1. And I must say that it blew me away on more than one level. The refactoring and navigation support is as good as ever and the integration with VS 2010 is flawless. But there was a dark side to the upgrade.

My PC at home had no problem running VS 2008 with R# 4.*, and even VS 2010 alone did not trouble my little dual-core machine, but as soon as R# 5.1 was installed the computer just stopped.

I hit major performance problems, especially when the background “compiler” kicked in. The feature set is impressive, but I still find it hard to use because writing “return foo;” takes 30+ seconds.

I will come back with another post describing all the cool features, pictures included. But I just had to get the little performance problem out of the door.

And now off to my well deserved vacation 🙂

Albacore and .NET 4.0

I love Albacore and that’s a fact! I use it to build all my .NET projects (some I even deploy with it). But since the introduction of the .NET 4.0 framework there has been a problem: the msbuild task has a hard-coded path to msbuild.exe, and that path points to the 3.5 framework.

This is not a big problem, but it still does not look good in the rake script. Sadly, at this point in time there is still no “nice” fix for this problem, but there is a “good” workaround.

msbuild :msbuild do |msb|
  msb.path_to_command = File.join(ENV['windir'], 'Microsoft.NET',
                                  'Framework', 'v4.0.30319', 'MSBuild.exe')
  # ... other settings here
end

Hope this helps you stay on Albacore until this little thing gets fixed 🙂