Just trying to make sense of things...

Language Designers Should Publish Their Compiler Front-End

Monday, 25 February 2008 09:58 by jordan.terrell

As of late I've been doing a lot of research on understanding and designing computer languages and compilers.  Right now I'm just starting to read two books: Programming Language Pragmatics (Second Edition) and Engineering a Compiler.  Even though I have not read very far into either of these books, I've come to the following conclusion:

Language designers should publish the source code to their compiler's front-end.

Now, in case you don't know what the front-end of a compiler is, take a look at what Wikipedia has to say about it.  To put it in oversimplified, layman's terms: "The front-end basically parses the source code and makes sure it is valid."  That's it: it doesn't actually spit out your target code (e.g. machine code, MSIL, etc.), it only parses the source to make sure it can understand it.  The back-end of the compiler is responsible for the rest (for those who are compiler savvy, I know this is overly simplified).
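To make "parses and validates, but emits nothing" concrete, here is a toy front-end for simple arithmetic expressions.  This is purely my own illustration - it is not taken from any real compiler - but it shows the division of labor: this class answers "is the input well-formed?" and nothing more.

```csharp
using System;

// A toy compiler "front-end" for expressions like "1+2*3".  It only
// checks that the input is well-formed; it never emits target code
// (that would be the back-end's job).
public class ToyExpressionFrontEnd
{
    private readonly string _input;
    private int _pos;

    public ToyExpressionFrontEnd(string input) { _input = input; }

    // Grammar: Expr   := Term (('+' | '-') Term)*
    //          Term   := Number (('*' | '/') Number)*
    //          Number := digit+
    public bool IsValid()
    {
        _pos = 0;
        return ParseExpr() && _pos == _input.Length;
    }

    private bool ParseExpr()
    {
        if (!ParseTerm()) return false;
        while (Peek() == '+' || Peek() == '-')
        {
            _pos++;
            if (!ParseTerm()) return false;
        }
        return true;
    }

    private bool ParseTerm()
    {
        if (!ParseNumber()) return false;
        while (Peek() == '*' || Peek() == '/')
        {
            _pos++;
            if (!ParseNumber()) return false;
        }
        return true;
    }

    private bool ParseNumber()
    {
        int start = _pos;
        while (char.IsDigit(Peek())) _pos++;
        return _pos > start;
    }

    private char Peek()
    {
        return _pos < _input.Length ? _input[_pos] : '\0';
    }
}
```

So `new ToyExpressionFrontEnd("1+2*3").IsValid()` returns true, while `"1++2"` fails validation - and at no point was any code generated.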

With that in mind, I now come back to my original statement:

Language designers should publish the source code to their compiler's front-end.

No doubt many of you have noticed the surge of development tools around refactoring and code analysis.  Most of these rely on some language-parsing component to understand the source code you are working with and make suggestions.  Developing tools like these would be much simpler and more consistent if everyone relied on the same compiler front-end.  Plus, I would think it would only help to drive further adoption of the language.

I've got a few code analysis ideas that I would like to work on, and this is what has driven me to do research on developing compilers.  In order to implement these ideas, I'm going to need to write a few compiler front-ends for some of the more popular .NET languages.  Granted, there are some tools to help me do this (e.g. Lex, Yacc, Flex, Bison, Coco/R, ANTLR), but it would be so much more productive and consistent if the language designers would publish at least their compiler's front-end.

Categories:   Programming
Actions:   E-mail | Permalink | Comments (0) | Comment RSS feed

Sloppy Code

Thursday, 14 February 2008 12:37 by jordan.terrell

I couldn't have said it better myself.

Categories:   Programming
Actions:   E-mail | Permalink | Comments (1) | Comment RSS feed

iSynaptic.Commons - Cloneable<T>

Thursday, 24 January 2008 10:28 by jordan.terrell

So as I was working to increase the code coverage for my Commons framework, I found some gnarly edge cases that caused issues in my Cloneable<T> class - specifically dealing with value types.  I have since fixed those.

Cloneable<T> is really the guts of my Transactional<T> class, but I figured that it could be useful as a separate class.  When you start a transaction, Transactional<T> will use Cloneable<T> to create a copy of the object you want to be transactional.  Historically, the way I've seen this done is to use in-memory binary serialization to make a copy.

This is slow, and it consumes unnecessary additional memory.  Really, what you want to be able to do is create an instance of an object, and then copy the contents of its fields from the source to the clone.
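For contrast, here is roughly what the serialization-based approach looks like.  This is a sketch of the general technique, not code from any particular framework, and it only works for types marked [Serializable]:

```csharp
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public static class SerializationCloner
{
    // Deep-clones by round-tripping the object through an in-memory
    // binary stream.  Every field is serialized and then deserialized,
    // which is exactly what makes this approach slow and memory-hungry.
    public static T Clone<T>(T source)
    {
        BinaryFormatter formatter = new BinaryFormatter();
        using (MemoryStream stream = new MemoryStream())
        {
            formatter.Serialize(stream, source);
            stream.Position = 0;
            return (T)formatter.Deserialize(stream);
        }
    }
}
```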

In order to create an instance of an object that might not have a parameterless constructor, I use FormatterServices.GetSafeUninitializedObject(Type type).  This allows you to create an instance of an object without calling a constructor.  I first learned about this when reviewing how ObjectBuilder works.
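A minimal demonstration of the idea (the NoDefaultCtor type is just a made-up example):

```csharp
using System;
using System.Runtime.Serialization;

public class NoDefaultCtor
{
    public readonly int Value;
    public NoDefaultCtor(int value) { Value = value; }
}

public static class Demo
{
    public static void Main()
    {
        // Creates an instance without running *any* constructor;
        // every field starts out zeroed/null.
        NoDefaultCtor obj = (NoDefaultCtor)FormatterServices
            .GetSafeUninitializedObject(typeof(NoDefaultCtor));

        Console.WriteLine(obj.Value); // prints 0 - the constructor never ran
    }
}
```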

Once you've got an instance of the object, you could just use Reflection to copy the contents of the fields; however, that is slow.  Enter DynamicMethod!  DynamicMethod allows you to create a method on the fly and attach it to a type.  Using Reflection *once* to enumerate the fields, I build a method that can copy the fields from the source to the clone by emitting IL (Intermediate Language).  When you call the Clone(T source) or ShallowClone(T source) method, it calls this dynamically emitted method.
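To sketch the technique - this is my own minimal illustration, not the actual Cloneable<T> source, and it assumes T is a reference type:

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

public static class FieldCloner<T>
{
    // Built once per type T; copies every instance field from source to target.
    private static readonly Action<T, T> _copyFields = BuildCopier();

    private static Action<T, T> BuildCopier()
    {
        // null return type = void; skipVisibility lets us touch private fields.
        DynamicMethod dm = new DynamicMethod(
            "CopyFields", null, new[] { typeof(T), typeof(T) }, typeof(T), true);

        ILGenerator il = dm.GetILGenerator();

        // Reflect over the fields *once*, at method-build time...
        foreach (FieldInfo field in typeof(T).GetFields(
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic))
        {
            // ...and emit IL equivalent to: target.field = source.field;
            il.Emit(OpCodes.Ldarg_1);        // push target
            il.Emit(OpCodes.Ldarg_0);        // push source
            il.Emit(OpCodes.Ldfld, field);   // load source.field
            il.Emit(OpCodes.Stfld, field);   // store into target.field
        }
        il.Emit(OpCodes.Ret);

        return (Action<T, T>)dm.CreateDelegate(typeof(Action<T, T>));
    }

    public static void Copy(T source, T target)
    {
        _copyFields(source, target);
    }
}
```

Because the delegate is cached in a static field, the reflection and IL-emission cost is paid once per type; every subsequent Copy call runs the compiled method directly.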

DynamicMethods have a performance profile *almost* identical to that of normal compiled code.  I did a test cloning 1 million instances of a class that had 6 fields: using in-memory binary serialization it took roughly 2 minutes (just shy of it by a few seconds); using Cloneable<T> it took about 0.5 seconds!  This bodes well for my Transactional<T> class - creating copies of even complex objects is now a very cheap operation.  I still have to investigate the Code Access Security (CAS) ramifications of emitting the DynamicMethod, but I still think it is a better approach.

I'm still looking to release my Commons framework soon.  Code coverage is at 72% now, with Cloneable<T> at 100% covered (I'm sure there are still more edge cases I haven't tested).  Once I'm comfortable with the code coverage, I will release it.  I'm thinking of releasing it under something like the BSD or MIT license - because I want it to be usable for commercial purposes - but I haven't decided yet.

Any license you would like to see it released under?  (and don't say GPL or anything very similar - not going to happen!)

Categories:   .NET | Programming
Actions:   E-mail | Permalink | Comments (1) | Comment RSS feed

iSynaptic.Commons - CTP coming soon

Monday, 14 January 2008 10:02 by jordan.terrell

Lately I've been working on a "Commons" framework that I am planning to release to the public.  There are two versions of the framework: one that targets the .NET 2.0 Framework, and one that targets the 3.5 Framework.  The goal is to create a framework that really leverages the new C# (or VB) language enhancements.  I'm not sure how soon I will be releasing a CTP, but here is a list of things that it can do so far...

  • Custom Implementation of LINQ Standard Query Operators (partial implementation so far) so you can use LINQ when targeting .NET 2.0 framework (assuming you are using VS 2008).
  • Numerous extension methods, all in a separate namespace "iSynaptic.Commons.Extensions"
    • IEnumerable<T>.WithIndex()
    • IEnumerable<T>.LookAheadable()
    • IEnumerable<T>.Delimit(string delimiter)
    • Action<...>.Curry<...>(...)
    • Action<T>.MakeConditional(Predicate<T> condition)
    • ICollection<T>.Remove(params T[] itemsToRemove)
    • Enum.IsDefined() (useful/simpler as an extension method)
    • Func<...>.Curry<...>(...)
    • Func<T>.MakeConditional(Predicate<T> condition)
    • Func<T>.ToAction()
    • ...and many more
  • An implementation of the Specification pattern
  • Scope<T> and NestableScope<T> implementation
  • ReadOnlyDictionary<TKey, TValue>
  • Cloneable<T> - uses dynamic IL generation to clone objects really fast!
  • Transactional<T> - makes use of Cloneable<T> to create transactional objects that support System.Transactions
  • ScanningTextReader and SimpleScanner for custom text parsing
  • ProcessingInstructionParser to parse XML processing instructions when you are using XmlReaders
  • ...and much more
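For flavor, here is how a couple of the methods above might look in use.  The signatures are taken from the bullet list; since the framework isn't released yet, the exact shapes may differ, so treat this as a sketch of the intent:

```csharp
using System;
using System.Collections.Generic;
using iSynaptic.Commons.Extensions; // the framework's extension namespace

class Demo
{
    static void Main()
    {
        var names = new List<string> { "Alpha", "Beta", "Gamma" };

        // Delimit: collapse a sequence into one delimited string,
        // e.g. "Alpha, Beta, Gamma".
        string joined = names.Delimit(", ");
        Console.WriteLine(joined);

        // MakeConditional: wrap an action so it only executes when
        // the predicate is satisfied.
        Action<string> print = Console.WriteLine;
        Action<string> printBs = print.MakeConditional(s => s.StartsWith("B"));

        foreach (string name in names)
            printBs(name); // intended result: only "Beta" prints
    }
}
```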

I've got a number of things I plan to implement, but this is what is mostly functional now.  I want to get testing code coverage up a little higher (I'm at 66%), so stay tuned...

Is Visual Basic, the language, in its golden years?

Monday, 12 November 2007 11:26 by jordan.terrell

I may be setting myself up for a flame war, but I just had to ask this question.

Visual Basic has been around for some time, and version 9.0 of the language will be released soon.  Visual Basic has gone through two major platform shifts - first with the move to COM, and then with the move to .NET.  Through each of these shifts, the language has had to change to participate in these platform changes.  When I recall the 1.0 release of .NET, many Visual Basic 6 programmers viewed VB.NET as an entirely different language because of the sheer number of changes to the language.  However, many have successfully transitioned to the .NET platform and continued to move forward.

However, look at the new features in VB 9.0 - I have to say, in my opinion, the language is starting to look quite bloated, especially when you look at the XML literals feature.

Can Visual Basic continue to survive shifts in platform directions, especially with the continued functional and dynamic focus of languages such as C#, F#, IronPython and IronRuby?

Is Visual Basic in its golden years?  What do you think?

Categories:   .NET | Programming
Actions:   E-mail | Permalink | Comments (3) | Comment RSS feed

UnitRun discontinued...

Thursday, 8 November 2007 13:57 by jordan.terrell

I was very saddened today to find out that my favorite unit testing front end was discontinued.  I wish that it was still available for download, even if it wasn't being supported.

JetBrains - can you please put UnitRun back up for download?

Categories:   .NET | Programming
Actions:   E-mail | Permalink | Comments (4) | Comment RSS feed

The Ring of Truth...

Wednesday, 24 October 2007 09:36 by jordan.terrell


Categories:   Programming
Actions:   E-mail | Permalink | Comments (0) | Comment RSS feed

Request to add operator to the C# language

Friday, 14 September 2007 11:01 by jordan.terrell

I just submitted a suggestion to add an operator to C#.  I kept the sample on the suggestion simple, but I'm really trying to enable a scenario like this:

   public class SomeClass
   {
       private List<string> _Strings = null;

       public List<string> Strings
       {
           get { return _Strings ?= new List<string>(); }
       }
   }


In this sample, if _Strings was null, the getter would create a new list of strings, assign it to the _Strings field, and return the newly created list.  However, if _Strings was not null, the getter would simply return the list referenced by the _Strings field.
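For what it's worth, today's C# can already express this lazy-initialization pattern with the existing null-coalescing operator, though it takes a nested assignment rather than a single ?= token:

```csharp
using System.Collections.Generic;

public class SomeClass
{
    private List<string> _Strings = null;

    public List<string> Strings
    {
        // Equivalent effect to the proposed "_Strings ?= new List<string>()":
        // assign only when null, then return the field.
        get { return _Strings ?? (_Strings = new List<string>()); }
    }
}
```

The proposed operator would just make this common idiom more concise.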

If you like this idea, go ahead and vote on it!

Categories:   .NET | Programming
Actions:   E-mail | Permalink | Comments (2) | Comment RSS feed

Excellent Post by Jeremy on Dependency Injection

Friday, 7 September 2007 13:23 by jordan.terrell

A very well reasoned, thought out post on the use of dependency injection:

The First Compile

Friday, 7 September 2007 11:19 by jordan.terrell

When I first create a .NET project (really any software project, not just .NET), I have a general rule that I try to follow throughout the project:

Anytime someone retrieves the project's source code, so long as they have the correct version of the compiler installed, the first compile should be flawless - no errors, no warnings.

Anytime I pull down a project/solution (open source or internal company project) and this is not the case, I already have a sour taste in my mouth, so to speak.  I've seen binary references that were not included with the source, tools needed to build that were missing, hard coded paths, and so on.

Having a continuous integration server build your project is a good way to ensure that this experience happens throughout a project's life cycle.  Plus, you don't have to deal with other developers interrupting you to ask: "Can you help me get this project to compile?"

Just something to think about...