# Sunday, 07 November 2010

At the Fall 2010 DevConnections keynote, Scott Guthrie demo’d NuGet (formerly NuPack). After seeing how easy NuGet makes it to install project references and resolve dependencies, I had to try it!

After installing CTP1 of NuGet and a bunch of packages, I found out I needed the latest version (the 10-26-2010 build) in order to install some of the packages I wanted. Between CTP1 and the 10-26-2010 build, the .nuspec file format changed slightly, making packages created with CTP1 incompatible with the newer build.

I decided to pop open a .nupkg package file (.nupkg is the file extension of a NuGet package) using 7-Zip, since a .nupkg file is really just a .zip file. Inside I saw a .nuspec file.

Opening up the .nuspec file reveals it's an XML file. After comparing the CTP1 and 10-26-2010 build versions of a .nuspec file, I discovered the only difference is the XML namespace declarations.

CTP1 version of a .nuspec file:

<package>
    <metadata>
        ...

10-26-2010 build version of a .nuspec file:

<package xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <metadata xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">

It took a few minutes to update all my .nupkg files to the new format, and now everything works again!
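Updating a handful of packages by hand is quick, but if you had a lot of them, the edit is easy to script. Here is a minimal C# sketch of the idea; it assumes the ZipFile/ZipArchive APIs from .NET 4.5's System.IO.Compression (which didn't exist when I did this with 7-Zip), and NuspecFixer is just a name I made up:

using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Xml.Linq;

class NuspecFixer
{
    static void Main()
    {
        XNamespace nuspec = "http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd";

        foreach (var path in Directory.GetFiles(".", "*.nupkg"))
        {
            using (var zip = ZipFile.Open(path, ZipArchiveMode.Update))
            {
                // A .nupkg is just a .zip with a .nuspec inside
                var entry = zip.Entries.First(e => e.Name.EndsWith(".nuspec"));
                using (var stream = entry.Open())
                {
                    var doc = XDocument.Load(stream);

                    // Add the xsd/xsi declarations to <package>
                    doc.Root.SetAttributeValue(XNamespace.Xmlns + "xsd", "http://www.w3.org/2001/XMLSchema");
                    doc.Root.SetAttributeValue(XNamespace.Xmlns + "xsi", "http://www.w3.org/2001/XMLSchema-instance");

                    // Move <metadata> and everything under it into the new default namespace
                    foreach (var el in doc.Root.Element("metadata").DescendantsAndSelf())
                        el.Name = nuspec + el.Name.LocalName;

                    // Rewrite the entry in place
                    stream.Seek(0, SeekOrigin.Begin);
                    stream.SetLength(0);
                    doc.Save(stream);
                }
            }
        }
    }
}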

.NET | HowTo | NuGet
Sunday, 07 November 2010 11:08:09 (Alaskan Standard Time, UTC-09:00)
# Friday, 03 April 2009

Background

For a project I'm doing, I have a task model for the various pieces. In the beginning, I was manually creating a List<ITask>. As I kept adding tasks to run, I started thinking about hacking some code together to rifle through my assembly and pull back all the classes which implement ITask.

Then I remembered hearing about the Managed Extensibility Framework (MEF). I did some searching, found the MEF home page, and even read the MEF overview. But none of that told me what I really wanted to know: what's the fastest way to get started using MEF as a component loader?

I did some more searching and found the dnrTV episode "Glenn Block on MEF, the Managed Extensibility Framework" and after 20-30 minutes they finally got down to how to create a plugin for your app.

But what I really wanted, and I bet a lot of others want too, is a quick-start guide for creating a plugin.

Solution

Download the latest version of MEF; as of this writing it's Preview 4. Grab the System.ComponentModel.Composition.dll from the bin folder and stash it somewhere, then add a reference to said dll in your project.

On your plugin class, add the Export attribute:

[Export(typeof(IPlugin))]
public class Foo : IPlugin { ... }
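
For reference, IPlugin is just an ordinary interface. The only member the code at the end of this post relies on is Name, so a minimal (hypothetical) version looks like:

public interface IPlugin
{
    string Name { get; }
}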

In your plugin consumer, create a property to hold your plugins, and add the Import attribute:

[Import(typeof(IPlugin))]
internal IList<IPlugin> _myPlugins { get; set; }

Now, tell MEF where to find the plugins (the AssemblyCatalog) and which object's imports it should satisfy (the batch.AddPart(this) call):

private void LoadPlugins()
{
    var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
    var container = new CompositionContainer(catalog);
    var batch = new CompositionBatch();
    batch.AddPart(this);
    container.Compose(batch);
}


I put my call to the LoadPlugins method in the constructor.
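
Put together, the consumer ends up looking roughly like this (PluginHost is just a name I made up for this sketch):

public class PluginHost
{
    [Import(typeof(IPlugin))]
    internal IList<IPlugin> _myPlugins { get; set; }

    public PluginHost()
    {
        LoadPlugins(); // compose the imports right away
    }

    // LoadPlugins is exactly the method shown above
}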

Now, spin through your plugins and do the work:

Console.WriteLine("Found {0} plugins", _myPlugins.Count);
foreach (var plugin in _myPlugins) {
    Console.WriteLine(plugin.Name);
}


Download the complete source for this (really, there are only about 10 extra lines to glue things together) and have fun!

.NET | C# | HowTo | MEF
Friday, 03 April 2009 20:37:33 (Alaskan Standard Time, UTC-09:00)
# Sunday, 15 March 2009

A long time, and many, many moons ago, I wrote some code to interface our build server with a network power switch we had lying around the office. We used this to turn lava lamps on and off to indicate the status of the build. Some might ask why we didn't use the X10 support that is already in CCNET; the answer is mostly cost, plus the fact that X10 wouldn't work in our environment.

That was 2.5 years ago. Since then, our team has become more distributed. We have one guy working in Ann Arbor, MI, and occasionally have others telecommuting, so not everyone can see the status of the lamps. Also, in the 2.5 years since that code was written, a little thing called Twitter has become very popular. I did some research, found Tom Fanning's NAnt Twitter task, and briefly considered using it.

But in the end, I just couldn't resist adding my own developer gold plating and thought it would be neat if we could also issue commands to the build server via tweets. So with that feature in mind, I had to write it myself.

To start out with, I used Yedda's C# Twitter library. The Yedda library is a pleasure to work with; it makes sending a tweet as simple as:

new Twitter().UpdateAsXML(_username, password, message);

One thing the Yedda library didn't have was the ability to query for your Twitter replies. A quick look through the source and the Twitter API docs, and I realized this would be trivial to add. The details of how I did it aren't important to this post, but if you're curious, you can look at lines 567 - 627 of the Yedda source included with this post.

I'm not going to dive too much into how the whole project works, but here's a high-level overview. The software runs as a Windows service and leverages the CCTrayLib assembly from CruiseControl.NET to do all the heavy lifting. It polls the CruiseControl.NET server every 5 seconds and fires events when things happen. The two events we want are Polled and BuildOccurred.

These events in turn let us kick off our own events based on the state of the build. From that state, we grab the appropriate actions to run as defined in the BuildActions.xml file, which maps a build state to a set of actions. In the case of the "Building" state, we send a tweet with a message template of "{PROJECT} is building", turn port 1 on, and turn port 2 off on our ePower switch. Easy enough.
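
To give a flavor of the mapping, an entry in BuildActions.xml might look something like this. The element names here are hypothetical; the real schema is in the download at the end of this post:

<buildAction state="Building">
    <tweet template="{PROJECT} is building" />
    <switchPort port="1" on="true" />
    <switchPort port="2" on="false" />
</buildAction>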

But how do we take in commands? I pondered this for a minute, then realized it would be trivial to leverage the Twitter replies API for this. But what about security? We can't have just anyone sending commands to our build server. This is where the Twitter friends API comes in handy: in order to issue commands to our build server, the account our build server uses has to have you as a friend, not just a follower.

The first action I implemented was the force build command. The idea for the grammar came from a joke reply @orand sent our Twitter bot. After that, I thought it might be nice to be able to get the list of projects, get a project's status, and ask for help. That leaves us with a total of four commands.

A small bunny trail

When I first wrote the command parser, it looked something like this:

if (msg.StartsWith("force build "))
    ProcessForceBuildCommand(msg, user);
else if (msg.StartsWith("get projects"))
    ProcessProjectListsCommand(msg, user);
...

I thought about it for a while and figured there had to be an easier way. We are using .NET 3.5 after all, with all its lambda, LINQ, and new and improved delegate goodness. I did some research, came upon the Action<T> (and Func<T>) delegate types, and came up with this implementation for registering commands:

_commands = new Dictionary<string, Action<string, string>>
    {
        {"force build ", ProcessForceBuildCommand},
        {"get projects", ProcessProjectListsCommand},
        {"get project status ", ProcessProjectStatusCommand},
        {"help", ProcessHelpCommand}
    };
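
Each handler just needs to match the Action<string, string> signature, taking the raw message and the user who sent it. As a hypothetical example (the real bodies are in the download), the force build handler could start like this:

// msg is the full reply text, user is the Twitter account that sent it
private void ProcessForceBuildCommand(string msg, string user)
{
    // Whatever follows the "force build " prefix is the project name
    var project = msg.Substring("force build ".Length).Trim();
    SendTweet(string.Format("@{0} forcing a build of {1}", user, project));
}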

Once we have all the commands registered, we can use some lambda and LINQ magic to act on whatever command was issued:

var key = _commands.Keys.Where(msg.StartsWith).FirstOrDefault();
if (!string.IsNullOrEmpty(key))
    _commands[key].Invoke(msg, user);
else
    SendTweet(string.Format("@{0} I'm sorry, I don't understand you. Maybe you should ask for help", user));

Get to the point, I want to see the source code

The code overall is fairly well structured (if I do say so myself), although there is one area where I violated the separation of concerns rule: the TwitterManager class knows more about CruiseControl.NET than it should. But given that this is a very simple internal project, and not for public consumption, I'm mostly OK with that :)

I've included the source to our entire build monitor; I hope you find it useful. We are using a very old version (1.1) of CruiseControl.NET in our environment. If you're using a more recent version, you will probably need to swap out the CruiseControl CCTrayLib and Remote assemblies for something more recent, and invert some of the commented-out code in the SetupCruiseControl method of BuildServerMonitor.cs.

If you have any questions, or find/fix any bugs, please feel free to leave a comment or send me a tweet; my username on Twitter is akcoder.

Download the Afhcan.BuildMonitor

.NET | C# | HowTo
Sunday, 15 March 2009 19:59:22 (Alaskan Standard Time, UTC-09:00)
# Friday, 20 February 2009

One of our applications has a Windows service in it. To make debugging and running this service easy, we have a WinForm in the service which can be activated by passing a command line switch. Simple enough. Our service does all its logging with log4net. I wanted to be able to put the output of the logging on our development form, but how?

After looking at various options, I realized what I needed to do was create a log4net appender, add the appender to the log4net logger, and grab the contents of the logger at regular intervals.

Solution

I created the appender below, which uses a StringBuilder as the backing store. It takes one bool parameter in the constructor, which lets you specify whether the log should be built up in reverse. This is useful if you want to display the most recent event at the top.

public class StringBuilderAppender : log4net.Appender.AppenderSkeleton
{
    private System.Text.StringBuilder _builder = new System.Text.StringBuilder();
    private readonly bool _invert;

    public StringBuilderAppender(bool invert) { _invert = invert; }

    public string Text { get { return _builder.ToString(); } }

    protected override void Append(log4net.Core.LoggingEvent loggingEvent)
    {
        var msg = loggingEvent.RenderedMessage;
        
        if (_invert)
            _builder = new System.Text.StringBuilder().AppendLine(msg).Append(_builder);
        else
            _builder.AppendLine(msg);
    }
}

Now we need to add our new appender to the logger. I found this helper method someone wrote:

public static void AddAppender(string loggerName, IAppender appender)
{
    log4net.ILog log = log4net.LogManager.GetLogger(loggerName);
    log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)log.Logger;

    l.AddAppender(appender);
}

Finally, let's put it all together:

StringBuilderAppender appender = new StringBuilderAppender(true);
AddAppender("MyLogger", appender);

// Poll the appender every 5 seconds and push its text to the form.
// A while(true)/Sleep loop would block the UI thread, so use a timer.
var timer = new System.Windows.Forms.Timer { Interval = 5000 };
timer.Tick += (s, e) => someControl.Text = appender.Text;
timer.Start();
.NET | HowTo | Logging
Friday, 20 February 2009 22:18:15 (Alaskan Standard Time, UTC-09:00)
# Tuesday, 10 February 2009

At our organization, we have to globalize our software. Making sure you've gotten all the strings globalized can be a real pain. You have to create a new language resource that looks nothing like your native language, then set Thread.CurrentThread.CurrentUICulture to the culture of the new language resource you created. Such a pain.

While pondering that this afternoon, I came upon something better. The .NET ResourceManager looks for the most specific resource file, then works its way back to the least specific. For example, given the following resource files:

  • i18n
  • i18n.en
  • i18n.en-US

If your current culture was en-GB, the ResourceManager would use the resource file for i18n.en, since there is no i18n.en-GB. But if your current culture was da-DK, it would fall back to i18n.

In our software, we have i18n and i18n.da-DK resource files, plus i18n.fr-FR, which is a special, internal resource file. What's so special about the fr-FR resource file, you ask? The fr-FR resource file is really the i18n resource file which has been transformed to replace all the localized text with dashes.

Why did we do this? Because with all the English text replaced with dashes, it is very easy to see which text in the application hasn't been globalized. The downside is that we have to change CurrentUICulture (and CurrentCulture) to fr-FR in order to test this.
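
For reference, this is the kind of switch we had to make somewhere during startup to test (CultureInfo lives in System.Globalization), and it is exactly the hassle the solution below removes:

Thread.CurrentThread.CurrentUICulture = new CultureInfo("fr-FR");
Thread.CurrentThread.CurrentCulture = new CultureInfo("fr-FR");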

Solution

The solution is actually quite simple: rename the fr-FR resource file to i18n.en-US (or whatever the ISO code for your culture is). Now when you're testing, the ResourceManager will pick the most specific resource file and use that. But don't forget to remove the en-US folder from the final build folder before you deploy your application, lest users get your debug language resource.

Tuesday, 10 February 2009 14:11:48 (Alaskan Standard Time, UTC-09:00)
# Thursday, 29 January 2009

As our production database gets more and more data in it, we noticed that things were slowing down. I ran SQL Profiler trying to figure out if we needed to add more indexes, better arrange the data, or do anything else to improve the performance. After about an hour of running queries, creating indexes, profiling, and looking at execution plans, I had gotten barely anything in performance gains.

I then decided to take a different tack and wondered if our indexes needed to be rebuilt. A quick Google later, and I came across an article on Tips for Rebuilding Indexes over at SQL Server Performance.

The gist of the article is that your indexes get fragmented and need to be rebuilt. I ran the script presented in the article and noticed a substantial improvement in performance. I failed to capture metrics to quantify the improvements, but the users definitely noticed.

Without further ado, here is the script from the article:

If you're using MS SQL Server 2000:

USE DatabaseName --Enter the name of the database you want to reindex

DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'base table'

OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
DBCC DBREINDEX(@TableName,' ',90)
FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor

DEALLOCATE TableCursor
 
If you're using MS SQL Server 2005:

USE DatabaseName --Enter the name of the database you want to reindex

DECLARE @TableName varchar(255)

DECLARE TableCursor CURSOR FOR
SELECT table_name FROM information_schema.tables
WHERE table_type = 'base table'

OPEN TableCursor

FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
-- ALTER INDEX won't take a variable directly, so use dynamic SQL
EXEC('ALTER INDEX ALL ON [' + @TableName + '] REBUILD')
FETCH NEXT FROM TableCursor INTO @TableName
END

CLOSE TableCursor

DEALLOCATE TableCursor
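
One addition of my own, not from the article: on SQL Server 2005 you can check how fragmented your indexes actually are before deciding to rebuild anything, using the sys.dm_db_index_physical_stats DMV. Something like:

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC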
HowTo | SQL
Thursday, 29 January 2009 10:04:54 (Alaskan Standard Time, UTC-09:00)
# Saturday, 24 January 2009

Background

Call me a control freak, but I like to see all the mounted volumes on my Mac. I could open Terminal and cd to the /Volumes folder, or I could use Finder's Go -> Go to Folder to see everything that OS X has mounted in the Volumes folder. But that's kind of a pain.

Solution

After a little bit of digging around, I found out about SetFile. SetFile is a command line utility that lets you set file attributes on files in an HFS+ directory. After figuring out its parameters, I came up with this little ditty to make the Volumes folder show up under "Macintosh HD." Run this in Terminal:

sudo SetFile -a v /Volumes

With this command, you are setting the visibility attribute on the Volumes folder to visible. To reverse the process, change -a v to -a V. Open up "Macintosh HD" and you should now see all the volumes mounted on your Mac!
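
Spelled out, the command to hide the Volumes folder again is:

sudo SetFile -a V /Volumes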

Screenshot
Saturday, 24 January 2009 20:59:00 (Alaskan Standard Time, UTC-09:00)
# Friday, 23 January 2009

Background

When I originally got my Mac Mini, Mac OS X 10.5.1 was out. I did a quick Google search and found simple steps to follow to get Time Machine to back up to a network volume on my Linux server. Everything worked great!

When Mac OS X 10.5.2 was released a few weeks later, Time Machine would no longer back up to my file server. I did a lot of googling and found I wasn't the only one with the problem, but I couldn't find any solutions. I've given it the old college try a few times since then, as recently as mid-December 2008, but to no avail.

I don't know what possessed me to try to get it working again this time, but I did. And I won! It wasn't an easy battle, nor was it an epic battle. But it was a battle nonetheless.

Problem

I found various postings on the net about how to get everything working, including Hupio's OSX Timemachine and Samba/Windows share. But nothing really worked. I kept getting the error message "the backup disk image could not be mounted."

I almost gave up again, but decided to google the error message, and came across a few more sites, but they didn't have anything of interest. I don't know why, but I tried to create my sparse bundle on the network share itself, instead of creating it on the Mini and moving it to the network share. That got me the error message "hdiutil: create failed - Operation not supported".

Googling that error message led me to Viraj's post about Time machine + AFP + Ubuntu - Samba. Viraj got everything working by installing the AFP service on his Linux (Ubuntu) server. He linked to How to: Install Netatalk (AFP) on Ubuntu with Encrypted Authentication, which was perfect because I happen to be running an Ubuntu server.

Solution

If you're using a Linux file server like I am, and want to back up your Mac to it with Time Machine, follow these steps:

1. Install AFP on your Linux server

2. Figure out where you are going to store the backups on your file server. I stored mine in /media/backup/TimeMachine. You will need to edit your /etc/netatalk/AppleVolumes.default file and point it at the directory (note that sudo doesn't apply to a >> redirection, so pipe through tee instead):

echo "/media/backup/TimeMachine \"Time Machines\"" | sudo tee -a /etc/netatalk/AppleVolumes.default

3. Restart netatalk

sudo /etc/init.d/netatalk restart

4. Mount your "Time Machines" volume. Finder -> Go -> Connect to Server and enter afp://IPADDRESS/Time Machines

5. Create a sparse bundle. If your OS volume is case-sensitive like mine, run this in Terminal:

hdiutil create -layout SPUD -size 50g -fs "Case-sensitive Journaled HFS+" -type SPARSEBUNDLE -volname "TimeMachine for YOURNAME" "YOURMACSNAME_MACADDRESS.sparsebundle"

This will create a 50 GB sparse bundle for Time Machine. If your OS volume is not case-sensitive (the default), use this command instead:

hdiutil create -layout SPUD -size 50g -fs "Journaled HFS+" -type SPARSEBUNDLE -volname "TimeMachine for YOURNAME" "YOURMACSNAME_MACADDRESS.sparsebundle"

I'm not going to go into the details of the command line; the link above goes into greater detail. You will need to read the article so you can plug in the correct values.

6. Move your newly created sparsebundle to your "Time Machines" share:

mv mini_MACADDRESS.sparsebundle /Volumes/Time\ Machines/

7. Configure your Mac to allow backing up to a network share:

defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

8. Finally, open Time Machine, click "Change Disk" and point it at your "Time Machines" volume. In 2 minutes, Time Machine will start backing up your data to your Linux file server!

A quick note about the conventions used above

All of the commands above are meant to be run in Terminal. The commands that start with sudo (steps 2 and 3) are run on your Linux server; the rest are run on your Mac.

Friday, 23 January 2009 22:42:58 (Alaskan Standard Time, UTC-09:00)
# Tuesday, 06 January 2009

After much time and much trial and error, I was finally able to get Visual Studio's remote debugging features to work. In my travels around the Internet, no one seems to have compiled into one page all the steps needed to make the process work successfully and seamlessly; this is my attempt.

Setting up

Complete all the steps listed on the How to: Set Up Remote Debugging article on MSDN.

Permissions

Don't try to fight the cross-domain permissions battle; it's just not worth it. If the machine you're trying to debug is not on a domain, then don't run VS from a machine that's on the domain. Make sure the same user with the same password exists on both machines.

PDBs

Put the PDB files on the remote (target) machine in the same folder as your app.

Source Code

In our environment, our build server produces an installer, and it also produces the PDBs we used in the step above. If you want Visual Studio to automatically pick up the right source files when you are debugging, you need to store the source code on your host machine in the same path as on the machine that built your PDBs.

For example, in our world the code lives in d:\code\projectname\code\ on our build server, so on your host machine you would store the source code in d:\code\projectname\code\ as well.

Results

If you have followed all these steps, and if the stars align just right, you should now be able to step through your source code when remote debugging.

Questions/Comments

Questions, comments, or just can't make it work for some reason? Leave a comment with your actual email address, and I'll try to address it!


Tuesday, 06 January 2009 13:20:47 (Alaskan Standard Time, UTC-09:00)