The Future of F#: Type Providers

I watched with interest Don’s talk at PDC. This blog post is to help me put in perspective some of my initial thoughts on what type providers are. I’ve tried to write it so you don’t need to see the video of the session first, but obviously it will help if you have. Some of this is just my speculation – I have no insider information, so I’m free to speculate, but don’t take everything I say too seriously. The talk gave a preview of “type providers”, an experimental feature that will appear in a future version of F#. The aim of type providers is to give tighter integration between the F# programming language and external data sources, so that external data sources can be accessed in a strongly typed way. This post explains what they are and speculates a bit about how they might work.

Let’s start by framing the problem this new feature is trying to solve: accessing external data sources in a strongly typed way. Today there are roughly three approaches you can take to accessing external data sources in strongly typed languages like C# and F#:

1) Access the data in a weakly typed way and then load it into strongly typed classes. Typically you use one or more of the classes provided by the .NET framework for loading the data from disk or the network, such as StreamReader or WebRequest, to read the data, and then there are various classes to help with parsing it, such as the XDocument class for parsing XML. This technique tends to require the programmer to write quite a lot of code before they can get their hands on the strongly typed data; this code also tends to be quite brittle and, unless the programmer takes great care, it doesn’t resist changes to the data format well.
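To make this concrete, here is a minimal sketch of approach 1 in F#; the XML document, its shape and the Person record are all invented for illustration.

```fsharp
// Approach 1: read weakly typed XML, then shovel it into a strongly typed
// record by hand. The data shape and Person record are hypothetical.
open System.Xml.Linq

type Person = { Name : string; Age : int }

let xml = """<people>
               <person><name>Ada</name><age>36</age></person>
               <person><name>Alan</name><age>41</age></person>
             </people>"""

// All the knowledge of the data's structure lives in these string
// literals: rename an element in the data and this code fails at
// runtime, not at compile time.
let people =
    XDocument.Parse(xml).Root.Elements(XName.Get "person")
    |> Seq.map (fun e ->
        { Name = e.Element(XName.Get "name").Value
          Age  = int (e.Element(XName.Get "age").Value) })
    |> List.ofSeq
```

Note how the element names appear only as strings; the compiler has no idea whether "name" or "age" actually exist in the data, which is exactly the gap type providers aim to close.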

2) Use reflection. Typically a programmer creates strongly typed classes to contain the data they are interested in; these classes will usually share a similar structure with the data they are expecting, and depending on the circumstances it may or may not be necessary to create further classes that describe how the data should be mapped to the classes that will contain it. They then need a module that will parse the incoming data and use reflection to create the appropriate instances of the classes to hold the data. Good examples of this are FluentNHibernate and Entity Framework in code-first mode: here the programmer defines the classes they are expecting to receive from the database, then reflection is used to create them from the incoming data. This generally works better than writing the mapping code by hand, but it can still be problematic; the programmer often still needs to write quite a bit of code to define the container classes, and they still have to deal with the problem of the code getting out of sync with the data’s definition.
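A sketch of approach 2, assuming the incoming data has already been parsed into name/value pairs; the Person class and the materialise helper are invented for illustration. Reflection matches names in the data to property names, so the class has to mirror the structure of the data it is expecting.

```fsharp
// Approach 2: a tiny reflection-based materialiser (hypothetical).
open System

type Person() =
    member val Name = "" with get, set
    member val Age = 0 with get, set

// Create an instance of 'T and copy each incoming value onto the
// matching property, if one exists and is writable.
let materialise<'T when 'T : (new : unit -> 'T)> (row : (string * obj) list) =
    let instance = new 'T()
    for (name, value) in row do
        let prop = typeof<'T>.GetProperty(name)
        if not (isNull prop) && prop.CanWrite then
            prop.SetValue(instance, value, null)
    instance

// If the data and the class drift out of sync, a renamed column simply
// stops being mapped; again nothing fails at compile time.
let person = materialise<Person> [ "Name", box "Ada"; "Age", box 36 ]
```

This is, in miniature, what ORMs like FluentNHibernate do: the generic helper is written once, but the container classes and the risk of silent drift remain the programmer’s problem.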

3) Code generation (“Microsoft loves code generation”, as Don put it). In this case some tool will generate code that represents the data we are interested in; typically it will also provide some mechanism for loading data into these classes. There needs to be some kind of metadata that describes the data: typically this will be the database schema or an XSD that defines the format of the XML you are interested in. This approach probably requires the least amount of code to be written by the programmer, but it is not without its problems. Firstly, each code generation tool tends to take a slightly different approach, so the programmer has to spend time getting to know the tool. Secondly, the tool must be integrated into the build process, and for a smooth experience it must also be integrated into Visual Studio, which is expensive and time consuming. Finally, the tools can often generate poor quality objects that are difficult to work with; for instance, early versions of xsd.exe, a tool for generating code to interact with XML in a strongly typed way, generated classes with public fields but no properties, and arrays instead of collection objects, meaning it was up to the programmer to initialize each field by hand.

As someone who has suffered at the hands of Microsoft’s love of code generation over the years (and continues to suffer), I’m very interested in anything that could smooth out this process.

While no approach is perfect, frameworks like NHibernate and Entity Framework have pushed what you can do with the reflection and code generation approaches to their limits, and when accessing relational data the experience is generally quite good. Less effort has been put into the experience of accessing external “web” data sources that return either XML or JSON, so here the programmer often needs to do more work to access the data in a strongly typed way. There are also other data sources, such as the Windows WMI database that contains information about the operating system, or the ubiquitous Excel spreadsheet, where virtually no effort has been made to expose them in a strongly typed way, so if a programmer needs the data they contain they must roll up their sleeves and do everything themselves.

So what exactly does a type provider do? And how does it aim to tackle this problem? As I said, I have no insider information here, which leaves me free to speculate and make educated guesses, but it also means that anything you read here should be taken with a large pinch of salt. To answer the “what exactly does a type provider do?” question we need to understand a bit about how the F# compiler works, so let’s look at a trivial example and examine the work the compiler needs to do to compile it.

Code Snippet
let firstIdentifier = 1
let secondIdentifier = 2
let thirdIdentifier = firstIdentifier + secondIdentifier

On the first two lines we define two identifiers that reference literal values. The compiler will parse these two lines and generate an abstract syntax tree (AST); after various stages of checking and optimisation, the AST, or a structure similar to it, will be used to generate the code that makes up the dll or executable we’re compiling. The important thing to retain is that the compiler has an internal data structure in which it stores the fact that two identifier definitions have been seen, along with the fact that these definitions point to literal values.

The third identifier is more interesting: here we define an identifier that references the other two identifiers. The compiler will go through the same process of parsing the line and generating an AST. This time the generated AST will contain an identifier definition, but instead of pointing to a literal it will point to an expression that contains two identifier references. Once the compiler has recognised these identifier references it must go through the process of checking that they are valid, to find their meaning. Basically, it checks all the code already in scope, along with all referenced dlls, to see if it can find a matching identifier; if the current scope contains open statements then it does extra work, trying each opened namespace in turn to see if it can find a match. If the compiler finds a match then compilation continues, with the compiler eventually using the identifier’s definition to generate code; if it doesn’t find a match it will generate a compile error. Of course, I’ve simplified things a bit: an identifier could reference lots of different things (functions, classes, methods, constructors, delegates, etc.), so the search and checking algorithm is quite complicated, but I believe the important thing to understand is that the compiler will search for the meaning of an identifier reference and eventually use this definition to generate code.

A type provider extends this process by giving the compiler extra places to search for an identifier reference’s meaning, beyond the code being compiled and the referenced dlls. A type provider is just a dll that contains a class implementing the following interface:

Code Snippet
public interface ITypeProvider
{
    Type GetType(string name,
                 BindingFlags bindingAttr);

    Expression GetInvokerExpression(
        MethodBase syntheticMethodBase,
        ParameterExpression[] parameters);

    event System.EventHandler Invalidate;

    Type[] GetTypes();
}

Presumably, the compiler will create an instance of this class and call the GetTypes method to find what extra types are in scope. Another presumption is that the F# team will provide some sort of library to make creating these Type objects easy. It will be the responsibility of the type provider to read the metadata that describes the data we’re interested in and generate the appropriate types to represent it. This will give the programmer access to new namespaces and type definitions that don’t exist in any dll but have been created by the type provider. This part of the process is fairly clear from what’s described in the presentation (though that doesn’t mean it won’t change in whatever finally gets released).
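As a purely speculative sketch, here is what a trivial provider might look like. The ITypeProvider interface is transcribed from the preview shown in the talk (declared here in F# because it ships in no public assembly yet, with the C# event member modelled as an IEvent property), and MinimalProvider is an invented implementation that simply re-exposes an existing .NET type rather than reading any real metadata.

```fsharp
// Hypothetical: neither this interface nor any provider infrastructure
// has been released; this only illustrates the shape of the contract.
open System
open System.Linq.Expressions
open System.Reflection

type ITypeProvider =
    abstract GetType : name : string * bindingAttr : BindingFlags -> Type
    abstract GetInvokerExpression :
        syntheticMethodBase : MethodBase *
        parameters : ParameterExpression [] -> Expression
    abstract Invalidate : IEvent<EventHandler, EventArgs>
    abstract GetTypes : unit -> Type []

type MinimalProvider() =
    let invalidate = Event<EventHandler, EventArgs>()
    interface ITypeProvider with
        // Presumably called by the compiler to discover the extra types
        // this provider brings into scope.
        member x.GetTypes() = [| typeof<DateTime> |]
        member x.GetType(name, _bindingAttr) =
            if name = "DateTime" then typeof<DateTime> else null
        // In "dynamic" mode the compiler would presumably splice the
        // returned expression in wherever the synthetic method is called.
        member x.GetInvokerExpression(methodBase, parameters) =
            Expression.Call(methodBase :?> MethodInfo,
                            [| for p in parameters -> p :> Expression |])
            :> Expression
        // Presumably raised when the provider's view of the external
        // schema changes, prompting the compiler to re-query it.
        member x.Invalidate = invalidate.Publish
```

A real provider would of course synthesize its Type objects from a schema rather than returning existing ones, which is where the hypothetical helper library would come in.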

A more open point, which wasn’t really covered in the presentation, is what code the compiler will generate when the programmer accesses a type that comes from a type provider rather than from a normal dll; this is where the real speculation about how it works lies. In the presentation Don talks about there being two modes for a type provider: a dynamic mode and a generative mode. I’m guessing that in the dynamic mode the compiler will generate code that calls back into the type provider each time a call to one of the provider’s types is made; I think this is what the GetInvokerExpression method in the ITypeProvider interface is all about. In the generative mode I would guess that the compiler generates actual code for the types used and embeds it in the dll or executable being created, so that once the dll or executable exists it’s all just normal .NET code (by which I mean IL) being executed. Presumably it will be up to the type provider to ensure the types it generates behave correctly and can connect to the data sources they represent.

I’m very interested in how this new feature shapes up and can’t wait to get my hands on the bits whenever they become available. Don’s calling this the most important work he’s done, and for someone who was key to the design and implementation of generics in .NET, that’s quite a statement. It does feel like something very new; while it’s true I don’t follow programming language research as much as I used to, it’s hard to think of any other statically typed language that has a feature quite like this. I also think that this could make working with external data sources much easier and could lead to some very good things, but there are also a number of interesting issues the F# team will need to tackle (and these are just my initial thoughts without having played with the technology, so please don’t read too much into them):

- As type providers allow you to execute code inside your compilation process, it looks like they’ll need to be very robust. While the compiler can protect itself from exceptions thrown by a type provider, it’s much more difficult to protect against a type provider that doesn’t return when called, or simply takes a very long time to return, or worse yet does something that brings down the process. Add to this the fact that type providers will run inside Visual Studio to provide IntelliSense, and this could lead to type providers slowing or crashing Visual Studio (although the same is true of custom msbuild tasks and various other VS plugins).

- It’s often possible to represent external data sources in several different ways; relational databases and how they are mapped to objects are a classic example of this. How the programmer will be able to control the mappings, or even whether they will have any control at all, is unclear at the moment. Also, the type provider model looks great for reading data from external sources, but I’m unsure how well it will handle writing to them. My experience with ORM mapping tells me that this is the harder part, as there tends to be more ambiguity about how the data should be saved.

- It looks like if you use type providers you’ll be taking dependencies on bizarre artefacts in your build process. A type provider looks like it will need an XSD or some other sort of metadata about the data to be able to compile; generally this is little different from taking a classic code generation approach, but there look to be more edge cases. In the video Don demonstrates using the WMI schema with a type provider; presumably the WMI schema is specific to the machine and will change between the various versions of Windows, which could lead to a program that will compile on some versions of Windows and not others.

- It’s clear type providers won’t be able to tackle all the issues of connecting to external data sources, such as versioning and availability, but then none of the current data access techniques have great stories for these – it’s just a hard problem.

However, all in all this is a very interesting, and indeed groundbreaking, technology that will allow you to extend the F# compiler in some very interesting ways (some of which probably no-one has even thought of yet).


Posted @ Monday, November 15, 2010 2:55 AM
