Creating comparison charts for stocks with FSharp Charting, Deedle and Yahoo Finance


When you want to visualize how a stock or portfolio has performed historically relative to the market as a whole, it is useful to create a comparison chart.

This blog shows how to create a line chart to compare two stocks with Deedle, FSharp Charting and F# Data.

In this example, the chart will show the performance of ANZ.AX relative to the ASX ALL ORDINARIES index (^AORD) over a three-year period from 2011-01-01 to 2014-01-01.

Starting Out

Deedle, FSharp.Charting and FSharp.Data are available on NuGet, so in the Package Manager Console, execute

Install-Package Deedle

and

Install-Package FSharp.Charting

and

Install-Package FSharp.Data

Create a new fsx file in the project, and call it ComparisonChart.fsx

This post is structured as source code chunks, which you can paste consecutively into the fsx file.

Functions to fetch and prepare the data

First we start with some functions to fetch and prepare the data to be charted.

#load "../packages/FSharp.Charting.0.90.6/FSharp.Charting.fsx"

#r @"../packages/FSharp.Data.2.0.9/lib/net40/FSharp.Data.dll"
#r @"../packages/Deedle.1.0.0/lib/net40/deedle.dll"

open System
open System.Collections.Generic
open System.Data
open Deedle
open FSharp.Charting
open FSharp.Charting.ChartTypes
open FSharp.Data

let getHistoricalData symbol (fromDate:DateTime) (toDate:DateTime)  = 
    let url = sprintf "http://ichart.finance.yahoo.com/table.csv?s=%s&a=%i&b=%i&c=%i&d=%i&e=%i&f=%i&g=d&ignore=.csv"
                    symbol (fromDate.Month - 1) (fromDate.Day) (fromDate.Year) (toDate.Month - 1) (toDate.Day) (toDate.Year)

    let data = CsvProvider<IgnoreErrors=true,Sample="Date (Date),Open (float),High (float),Low (float),Close (float),Volume (int64),AdjClose (float)">.Load url

    data.Rows
    |> Frame.ofRecords
    |> Frame.indexColsWith data.Headers.Value 
    |> Frame.indexRowsDate "Date"
    |> Frame.sortRowsByKey

let asPercentageGain (data:Series<DateTime, float>) =
    let firstItem = data |> Series.firstValue
    let percentageGain = data |> Series.mapValues (fun x -> (x - firstItem) / firstItem )
    percentageGain |> Series.observations

getHistoricalData returns the daily price data for the given symbol from Yahoo Finance. The Yahoo Finance URL format is pretty cryptic, so a resource such as http://www.gummy-stuff.org/Yahoo-data.htm is helpful if you want to figure out what the query string parameters mean.
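
As a quick reference, here is how the query string maps onto the sprintf arguments above (this summary is derived from the code itself; the gummy-stuff page documents the full parameter list):

// s = symbol                      e.g. ANZ.AX
// a = start month, zero-based     hence fromDate.Month - 1
// b = start day
// c = start year
// d = end month, zero-based       hence toDate.Month - 1
// e = end day
// f = end year
// g = interval                    "d" for daily
//
// So fetching ANZ.AX from 2011-01-01 to 2014-01-01 requests:
// http://ichart.finance.yahoo.com/table.csv?s=ANZ.AX&a=0&b=1&c=2011&d=0&e=1&f=2014&g=d&ignore=.csv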

The Yahoo Finance data is fetched in CSV format, and is parsed by the FSharp.Data CSV type provider. Note that when CsvProvider is instantiated, a sample is given describing the fields in the CSV returned by Yahoo Finance, along with type names in brackets. Using a type provider significantly simplifies fetching the CSV data, cutting down on the amount of boilerplate code.

The CsvProvider represents rows as tuples, so we need to include "Frame.indexColsWith data.Headers.Value" in the pipeline to ensure the column headers are correct (i.e. Date, Close, etc.), otherwise the columns will be named Item1, Item2, and so on.

asPercentageGain transforms a price series into a series where each entry represents the percentage gain or loss from the price on the first day in the series. The result of this can be used to build comparison charts.
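
As a quick illustration (hypothetical prices, not real market data), here is a minimal sketch of what asPercentageGain produces:

let example =
    series [ DateTime(2011, 1, 3) => 100.0
             DateTime(2011, 1, 4) => 110.0
             DateTime(2011, 1, 5) =>  95.0 ]
    |> asPercentageGain
// yields observations of 0.0, 0.1 and -0.05, i.e. 0%, +10% and -5%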

Fetching the data and creating the chart

Now we're ready to start creating the chart. Note that to compare performance properly, the same fromDate must be used for all stocks, so that both series start at a gain of 0%.

Note that the chart's Y-axis is formatted using the percent ("P") format specifier with 0 decimal places ("P0").

let fromDate = DateTime(2011, 1, 1)
let toDate = DateTime(2014, 1, 1)

let anz = getHistoricalData "ANZ.AX" fromDate toDate
let aord = getHistoricalData "^AORD" fromDate toDate

Chart.Combine 
    [ Chart.Line(anz?Close |> asPercentageGain, Name = "ANZ.AX")
      Chart.Line(aord?Close |> asPercentageGain, Name = "^AORD") ]
|> Chart.WithLegend(Enabled = true, InsideArea = false)
|> Chart.WithYAxis(LabelStyle = new LabelStyle(Format = "P0"))

If all goes well, running the FSX script should generate a chart such as:

Comparison Chart ANZ.AX vs AORD

A Basic Stock Trading Backtesting System for F# using Ta-Lib and FSharp.Data


This article is written for intermediate F# developers who have a basic familiarity with stock trading and technical analysis, and is intended to show the basics of implementing a backtesting system in F#.

If you're an F# beginner, it shouldn't take long to get up to speed on the concepts with a few introductory resources.

The backtesting strategy which you will implement is a simple SMA crossover, where the fast line is 10 bars and the slow line is 25. When the fast line crosses above the slow line, a buy signal is triggered, and when the fast line crosses below the slow line, a sell signal is triggered. Only one long position will exist at any time, so the system will not trigger another buy order until after the long position is sold.

Feel free to get in touch and send through any improvements, questions or corrections.

Starting Out

Both Ta-Lib and FSharp.Data are available on NuGet, so in the Package Manager Console, execute

Install-Package FSharp.Data

and

Install-Package TA-Lib

Create a new fsx file in the project, and call it SimpleBacktest.fsx

This post is structured as source code chunks, which you can paste consecutively into the fsx file.

#r "System.Data.Entity.dll"
#r "FSharp.Data.TypeProviders.dll"
#r "System.Data.Linq.dll"
#r @"C:\Source\StockScreener\packages\FSharp.Data.2.0.3\lib\net40\FSharp.Data.dll"
#r @"C:\Source\StockScreener\packages\TA-Lib.0.5.0.3\lib\TA-Lib-Core.dll"

open System
open System.Collections.Generic
open System.Data
open System.Data.Linq
open FSharp.Data
open Microsoft.FSharp.Data.TypeProviders

Visual Studio 2012 has some IntelliSense issues with F# script files and relative reference paths, so the references to FSharp.Data.dll and TA-Lib-Core.dll need to use their full paths. Replace C:\Source\StockScreener\ with your project path, so that the full paths point to the appropriate DLLs.

Fetching the Data

type TaLibPrepData =
    { Symbol : string;
      Date : DateTime[];
      Open : float[];
      High : float[];
      Low : float[];
      Close : float[]; }

module StockData =
    type Stocks = CsvProvider<AssumeMissingValues=true,IgnoreErrors=true,Sample="Date (Date),Open (float),High (float),Low (float),Close (float),Volume (int64),Adj Close (float)">

    let getHistoricalData symbol (fromDate:DateTime) (toDate:DateTime)  =
        let url = sprintf "http://ichart.finance.yahoo.com/table.csv?s=%s&a=%i&b=%i&c=%i&d=%i&e=%i&f=%i&g=d&ignore=.csv"
                    symbol (fromDate.Month - 1) (fromDate.Day) (fromDate.Year) (toDate.Month - 1) (toDate.Day) (toDate.Year)
        Stocks.Load(url).Rows |> Seq.toList |> List.rev

    let getTaLibData symbol (fromDate:DateTime) (toDate:DateTime) =
        let historicalData = getHistoricalData symbol fromDate toDate
        {
            Symbol = symbol;
            Date  = historicalData |> Seq.map (fun x -> x.Date)  |> Seq.toArray;
            Open  = historicalData |> Seq.map (fun x -> x.Open)  |> Seq.toArray;
            High  = historicalData |> Seq.map (fun x -> x.High)  |> Seq.toArray;
            Low   = historicalData |> Seq.map (fun x -> x.Low)   |> Seq.toArray;
            Close = historicalData |> Seq.map (fun x -> x.Close) |> Seq.toArray;
        } : TaLibPrepData

Methods in the TA-Lib .NET wrapper expect arrays as arguments, so the TaLibPrepData record type was created to map the data into labeled arrays, ready to be used in the TA-Lib wrapper methods.

Yahoo Finance data is fetched in CSV format, and is parsed by the FSharp.Data CSV type provider. Note that when CsvProvider is instantiated, a sample is given describing the fields in the CSV returned by Yahoo Finance, along with type names in brackets. Using a type provider significantly simplifies fetching the CSV data, cutting down on the amount of boilerplate code.

The URL format is pretty cryptic, so a resource like http://www.gummy-stuff.org/Yahoo-data.htm is important if you want to figure out what the query string parameters mean.

Ta-Lib Wrapper

type TaLibOutReal = 
    { OutReal : float array; OutBegIndex : int }

module TaLibWrapper =
    open TicTacTec.TA.Library

    // getAllocationSize returns an integer value with the exact allocation size needed for the TA-Lib outReal array. See: http://ta-lib.org/d_api/d_api.html
    let getAllocationSize (lookback:int) (startIdx:int) (endIdx:int) =
        let temp = Math.Max(lookback, startIdx)
        if temp > endIdx then 0 else endIdx - temp + 1

    // Simple Moving Average. See: http://ta-lib.org/function.html
    let sma timePeriod data =
        let startIdx = 0
        let endIdx = data.Close.Length - 1

        let mutable outBegIdx = 0
        let mutable outNBElement = 0

        let lookback = Core.SmaLookback(timePeriod)
        let allocationSize = getAllocationSize lookback startIdx endIdx
        let outReal : float array = Array.zeroCreate allocationSize

        let retCode = Core.Sma(startIdx, endIdx, data.Close, timePeriod, &outBegIdx, &outNBElement, outReal)

        if retCode <> Core.RetCode.Success then
            invalidOp (sprintf "TA-Lib Sma call failed with return code %A" retCode)

        { OutReal = outReal; OutBegIndex = outBegIdx } 

TaLibWrapper.sma returns SMA data where the timePeriod is in bars. For this example, the SMA is calculated over the bar’s closing price. startIdx and endIdx define the range of the data.Close array that the SMA will be calculated over. In this case, we’re processing the entire array of closing prices.

The variable names such as outReal, outNBElement are intended to be consistent with the TA-Lib source code and examples. outBegIndex represents the bar offset of outReal[0] – i.e. if the SMA has a timePeriod of 10 bars, 10 leading bars of data are needed before the first SMA value can be calculated, so outReal[0] will be the SMA at bar 10. The value of outBegIndex is returned along with outReal as both are used later in this example.
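
To make the offset concrete, here is a small sketch of the index arithmetic (smaAtBar is a hypothetical name; it is equivalent to the barValue helper defined later in this post):

// If Core.Sma reports outBegIdx = n, outReal.[0] is the SMA at bar n,
// and the SMA at any bar b >= n lives at outReal.[b - n]:
let smaAtBar (result : TaLibOutReal) bar =
    result.OutReal.[bar - result.OutBegIndex]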

In order to avoid allocating an outReal array any larger than necessary, we calculate the allocation size via getAllocationSize. If we simply allocated one element per input bar, there would be unused trailing elements (one for each lookback bar) at the end of the array.
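
A worked example with assumed figures: for 100 bars of data and a 10-bar SMA (whose lookback is 9 bars), the allocation works out as follows:

// Assumed figures: 100 bars (startIdx = 0, endIdx = 99), 10-bar SMA, lookback = 9.
// temp = max(9, 0) = 9, so the size is 99 - 9 + 1 = 91 computable SMA values.
let exampleSize = getAllocationSize 9 0 99   // 91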

Order Management

type OrderType =
    | Buy
    | Sell

type Order =
    { Date : DateTime;
      Symbol : string;
      Type : OrderType;
      Quantity : int;
      Price : double; }
    member this.Total = this.Price * double this.Quantity

let ninjaTraderDaysOffset = 2.0
let ninjaTraderBarOffset = 1

let createOrder orderType quantity (data : TaLibPrepData) bar =
    { Order.Date = (data.Date.[bar]).AddDays(ninjaTraderDaysOffset); Symbol = data.Symbol; Type = orderType; Quantity = quantity; Price = data.Open.[bar+ninjaTraderBarOffset]; }

type OrderMatch = { Buy:Order; Sell:Order; }

let getOrderMatches (orders:Order seq) =
    if Seq.exists (fun (x:Order) -> x.Quantity <> (Seq.head orders).Quantity) orders then
        invalidOp "All order quantities must match"

    let buyOrders = Seq.filter (fun x -> x.Type = OrderType.Buy) orders
    let sellOrders = Seq.filter (fun x -> x.Type = OrderType.Sell) orders

    if (Seq.length buyOrders <> Seq.length sellOrders) then
        invalidOp "The number of buy orders must match the number of sell orders"

    Seq.zip buyOrders sellOrders |> Seq.map (fun x -> { OrderMatch.Buy = fst x; Sell = snd x } ) |> Seq.toList

getOrderMatches pairs buy and sell orders and returns a list of matches. The implementation is a simple one and only supports matching where the buy and sell quantities are equal. In other backtesting scenarios, the user may want to queue up multiple buy orders, then make a large sell order, or vice versa; getOrderMatches could be extended to support this.

During development, I found it useful to run the same backtesting strategy in NinjaTrader and compare the results for correctness. The values of ninjaTraderDaysOffset and ninjaTraderBarOffset were used in order to get results which matched the output of NinjaTrader. See the section The Equivalent Ninjatrader Strategy below for further details.

The basic strategy for matching orders is to split the buy and sell orders into two separate lists, then use Seq.zip to match them together. The orders are already sorted by date. Note that Seq.zip returns a sequence of tuples, where fst x is the first element and snd x is the second element.
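
A minimal illustration of the positional pairing:

// Seq.zip pairs elements by position:
let zipped = Seq.zip [ 1; 2 ] [ "a"; "b" ]   // seq [(1, "a"); (2, "b")]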

Trade Management

type Trade =
    { Symbol : string; Quantity : int;
      EntryDate : DateTime; EntryPrice : double;
      ExitDate : DateTime; ExitPrice : double;
      CumulativeProfit : double; }
    member this.EntryTotal = this.EntryPrice * double this.Quantity
    member this.ExitTotal = this.ExitPrice * double this.Quantity
    member this.Profit = (this.ExitPrice - this.EntryPrice) / this.EntryPrice

let createTrade (buy:Order) (sell:Order) (previousTrade:Trade option) =
    if (buy.Quantity <> sell.Quantity)
        then invalidOp "All order quantities must match"

    let Profit = (sell.Price - buy.Price) / buy.Price;

    let cumulativeProfit =
        match previousTrade with
        | Some previousTrade -> (1.0 + previousTrade.CumulativeProfit) * ( 1.0 + Profit) - 1.0
        | None -> Profit

    { Trade.Symbol = buy.Symbol; Quantity = buy.Quantity;
      EntryDate = buy.Date; EntryPrice = buy.Price;
      ExitDate = sell.Date; ExitPrice = sell.Price;
      CumulativeProfit = cumulativeProfit; }

let getTrades (orderMatches:OrderMatch list) =
    let rec getRemainingTrades (lastTrade:Trade) (matches:OrderMatch list) (acc:Trade list) =
        match matches with
            | [] -> List.rev acc
            | x::xs ->
                let trade = createTrade x.Buy x.Sell (Some lastTrade)
                getRemainingTrades (trade) xs (trade::acc)

    let firstTrade = createTrade orderMatches.Head.Buy orderMatches.Head.Sell None
    firstTrade :: getRemainingTrades firstTrade orderMatches.Tail []

There are two sides to a trade, so an assertion is made at the beginning of createTrade to ensure the buy and sell quantity both match.

In order to calculate the cumulative profit, I need to know the cumulative profit of the previous trade. For the first trade, the cumulative profit is the same as the profit; for subsequent trades, the cumulative profit is calculated via (1.0 + previousTrade.CumulativeProfit) * (1.0 + Profit) - 1.0. The result is a fractional percentage, i.e. 0.13 = 13%.

The calculation of the cumulative profit is done via tail recursion and an accumulator (acc:Trade list).
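
A quick worked example with hypothetical figures: if the previous trade's cumulative profit is 10% and the current trade gains 5%, the new cumulative profit is 15.5%:

let previousCumulative = 0.10   // previous trade's cumulative profit: 10%
let currentProfit = 0.05        // this trade's profit: 5%
let newCumulative = (1.0 + previousCumulative) * (1.0 + currentProfit) - 1.0
// 1.10 * 1.05 - 1.0 = 0.155, i.e. 15.5%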

Trading Performance

type PerformanceSummary = 
    { Symbol : string;

      StartDate : DateTime; 
      EndDate : DateTime;

      TotalNetProfit : double; 
 
      GrossProfit : double;
      GrossLoss : double;

      TotalNumberOfTrades : int;
      WinningTrades : int;
      LosingTrades : int;

      CumulativeProfit : double; }

module TradingPerformance =
    let profitOrLoss = (fun (x:Trade) -> x.ExitTotal - x.EntryTotal)

    let totalNetProfit (trades:Trade list) = trades |> List.sumBy profitOrLoss

    let winningTrade = (fun (x:Trade) -> x.ExitPrice > x.EntryPrice)
    let losingTrade = (fun (x:Trade) -> x.ExitPrice < x.EntryPrice)

    let getGrossProfit (trades:Trade list) = trades |> List.filter winningTrade |> List.sumBy profitOrLoss
    let getGrossLoss (trades:Trade list) = trades |> List.filter losingTrade |> List.sumBy profitOrLoss

    let getPerformanceSummary (trades:Trade list) =
        let firstTrade = Seq.head trades
        let lastTrade = Seq.last trades

        { PerformanceSummary.Symbol = firstTrade.Symbol;
          StartDate = firstTrade.EntryDate; EndDate = lastTrade.EntryDate;
          TotalNetProfit = totalNetProfit trades; GrossProfit = getGrossProfit trades; GrossLoss = getGrossLoss trades;
          TotalNumberOfTrades = List.length trades; WinningTrades = List.filter winningTrade trades |> List.length; LosingTrades = List.filter losingTrade trades |> List.length;
          CumulativeProfit =  lastTrade.CumulativeProfit}

Strategy Implementation

let barValue (data:TaLibOutReal) bar barsAgo =
    data.OutReal.[bar - barsAgo - data.OutBegIndex]

let crossAbove (series1:TaLibOutReal) (series2:TaLibOutReal) bar lookback =
    (barValue series1 bar lookback) <= (barValue series2 bar lookback) && (barValue series1 bar 0) > (barValue series2 bar 0)

let crossBelow (series1:TaLibOutReal) (series2:TaLibOutReal) bar lookback =
    (barValue series1 bar lookback) >= (barValue series2 bar lookback) && (barValue series1 bar 0) < (barValue series2 bar 0)

module SmaCrossoverStrategy =
    let getOrders symbol (fromDate:DateTime) (toDate:DateTime) fastPeriod slowPeriod =
        let data = StockData.getTaLibData symbol fromDate toDate

        let smaFast = TaLibWrapper.sma fastPeriod data
        let smaSlow = TaLibWrapper.sma slowPeriod data

        let quantity = 100

        let tradeIsOpen = ref false

        seq { 
            for bar in smaSlow.OutBegIndex + 1 .. smaSlow.OutReal.Length-1 do
                if (!tradeIsOpen = false) && (crossAbove smaFast smaSlow bar 1) then 
                    tradeIsOpen := true
                    yield (createOrder OrderType.Buy quantity data bar)

                else if (!tradeIsOpen = true) && (crossBelow smaFast smaSlow bar 1) then 
                    tradeIsOpen := false
                    yield (createOrder OrderType.Sell quantity data bar) 

            if !tradeIsOpen = true then
                let bar = smaSlow.OutReal.Length-1
                yield (createOrder OrderType.Sell quantity data bar) 
        }

The SmaCrossoverStrategy should be fairly self-explanatory and is written in an imperative programming style. getOrders iterates over each bar, checking for an SMA crossover: a buy order is created when no trade is open and the fast SMA crosses above the slow SMA, and a sell order when a trade is open and the fast SMA crosses below the slow SMA.

Finally, after all the bars have been processed, if there is a trade still open, a Sell order is created with the price at the final bar in the data series.

Displaying Backtesting Performance

let fromDate = new DateTime(2000, 1, 1)
let toDate = new DateTime(2003, 1, 1)
let symbol = "ANZ.AX"
let fastPeriod = 10
let slowPeriod = 25

let orders = SmaCrossoverStrategy.getOrders symbol fromDate toDate fastPeriod slowPeriod |> Seq.toList
let orderMatches = getOrderMatches orders 
let trades =  orderMatches |> getTrades |> Seq.toList
let tradingPerformance = TradingPerformance.getPerformanceSummary trades

let asCurrency (v:double) = v.ToString("$0.00")
let asIsoDate (v:DateTime) = v.ToString("yyyy-MM-dd")
let asPercentage (v:double) = v.ToString("0.00%")

let printTrade tradeNumber trade =
    printfn "%7i | %8i | %-11s | %10s | %10s | %s | %6s | %10s" (tradeNumber+1) trade.Quantity (asCurrency trade.EntryPrice) (asCurrency trade.ExitPrice) (asIsoDate trade.EntryDate) (asIsoDate trade.ExitDate) (asCurrency trade.Profit) (asPercentage trade.CumulativeProfit)

The format specifiers in the printfn statement, such as %7i, control width and alignment so that you'll get a nicely formatted table.
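
For example (illustrative values only):

printfn "%7i|" 42          // prints "     42|" : right-aligned in a 7-character column
printfn "%-11s|" "ANZ.AX"  // prints "ANZ.AX     |" : left-aligned in an 11-character column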

If you then run the following code:

printfn "Trade-# | Quantity | Entry Price | Exit Price | Entry Date | Exit Date  | Profit | Cum.Profit "
trades |> Seq.iteri (printTrade)

You should see the following output:

Trade-# | Quantity | Entry Price | Exit Price | Entry Date | Exit Date  | Profit | Cum.Profit 
      1 |      100 | $10.30      |     $11.64 | 2000-03-15 | 2000-05-13 |  $0.13 |     13.01%
      2 |      100 | $12.20      |     $12.50 | 2000-06-01 | 2000-06-30 |  $0.02 |     15.79%
      3 |      100 | $12.42      |     $12.79 | 2000-07-07 | 2000-09-22 |  $0.03 |     19.24%
      4 |      100 | $13.51      |     $14.25 | 2000-10-06 | 2000-12-22 |  $0.05 |     25.77%
      5 |      100 | $15.07      |     $14.53 | 2001-02-03 | 2001-03-17 | -$0.04 |     21.26%
      6 |      100 | $13.65      |     $13.70 | 2001-04-18 | 2001-04-19 |  $0.00 |     21.71%
      7 |      100 | $14.00      |     $14.04 | 2001-04-27 | 2001-05-02 |  $0.00 |     22.05%
      8 |      100 | $14.10      |     $15.58 | 2001-05-04 | 2001-07-12 |  $0.10 |     34.87%
      9 |      100 | $16.46      |     $16.00 | 2001-08-05 | 2001-09-14 | -$0.03 |     31.10%
     10 |      100 | $16.95      |     $17.56 | 2001-10-07 | 2001-11-30 |  $0.04 |     35.81%
     11 |      100 | $18.57      |     $17.07 | 2001-12-27 | 2002-01-16 | -$0.08 |     24.84%
     12 |      100 | $17.60      |     $17.65 | 2002-02-06 | 2002-03-15 |  $0.00 |     25.20%
     13 |      100 | $17.86      |     $18.02 | 2002-04-11 | 2002-04-14 |  $0.01 |     26.32%
     14 |      100 | $18.33      |     $19.00 | 2002-04-19 | 2002-05-30 |  $0.04 |     30.94%
     15 |      100 | $19.38      |     $19.35 | 2002-06-01 | 2002-06-28 |  $0.00 |     30.74%
     16 |      100 | $18.65      |     $17.60 | 2002-08-18 | 2002-09-29 | -$0.06 |     23.37%
     17 |      100 | $18.50      |     $18.50 | 2002-10-27 | 2002-11-22 |  $0.00 |     23.37%

The Equivalent Ninjatrader Strategy

The output you get from the above backtesting system should be similar to the output of NinjaTrader's strategy backtesting feature, assuming you only buy long and maintain one trade at a time.

You can use the following strategy in NinjaTrader, and if you configure the strategy in the same way as the F# code, the Trades tab will show the same output.

#region Using declarations
using System;
using System.ComponentModel;
using System.Diagnostics;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Xml.Serialization;
using NinjaTrader.Cbi;
using NinjaTrader.Data;
using NinjaTrader.Indicator;
using NinjaTrader.Gui.Chart;
using NinjaTrader.Strategy;
#endregion

// This namespace holds all strategies and is required. Do not change it.
namespace NinjaTrader.Strategy
{
    /// <summary>
    /// Enter the description of your strategy here
    /// </summary>
    [Description("Enter the description of your strategy here")]
    public class SmaCrossoverStrategy : Strategy
    {
        #region Variables
        private int fast = 10;
        private int slow = 25;
        #endregion

		/// <summary>
		/// This method is used to configure the strategy and is called once before any strategy method is called.
		/// </summary>
		protected override void Initialize()
		{
			SMA(Fast).Plots[0].Pen.Color = Color.Orange;
			SMA(Slow).Plots[0].Pen.Color = Color.Green;

            Add(SMA(Fast));
            Add(SMA(Slow));

			CalculateOnBarClose	= true;
		}

		/// <summary>
		/// Called on each bar update event (incoming tick).
		/// </summary>
		protected override void OnBarUpdate()
		{
			if (CrossAbove(SMA(Fast), SMA(Slow), 1))
			    EnterLong();
			else if (CrossBelow(SMA(Fast), SMA(Slow), 1))
			    ExitLong();
		}

		#region Properties
		/// <summary>
		/// </summary>
		[Description("Period for fast MA")]
		[GridCategory("Parameters")]
		public int Fast
		{
			get { return fast; }
			set { fast = Math.Max(1, value); }
		}

		/// <summary>
		/// </summary>
		[Description("Period for slow MA")]
		[GridCategory("Parameters")]
		public int Slow
		{
			get { return slow; }
			set { slow = Math.Max(1, value); }
		}
		#endregion
    }
}

Where to next?

This sample is intended to show the basics of implementing a backtesting system in F#. There are many ways this can be extended:

  • Extend the Ta-Lib wrapper to support ADX, Beta, EMA, MACD, etc. The TA-Lib wrapper could be made a lot more generic to reduce duplicate code when adding support for these methods.
  • Extend the order matching system to handle orders of different quantities
  • Calculate the performance summary (with beta, # winning trades, # losing trades, etc)
  • The system above only allows one long position at a time; extend the system to allow incremental buying based on a pre-set account size.
  • Implement other strategies such as:
    • Buying when the 50 day EMA crosses above the 200 day EMA and the current price is above the 50 day EMA, sell when the price drops below the 200 day EMA.
    • Buying on dips only when the stock is in an uptrend and the dip is within 1% of the 200 day EMA

NullReferenceException when model binding strings after upgrading from ASP.NET MVC 1 to ASP.NET MVC 2


When migrating a site from ASP.NET MVC 1 to ASP.NET MVC 2, you can generally follow the instructions in http://www.asp.net/whitepapers/what-is-new-in-aspnet-mvc#_TOC2, taking note of any breaking changes. This will take you most of the way there, however there are a few undocumented issues which you may uncover if you’re migrating a site with a substantial amount of code.

By default, the ASP.NET MVC 1 model binder would initialize strings to string.Empty whereas ASP.NET MVC 2 will initialize strings as NULL. This is an undocumented breaking change and will be a problem if you have a substantial amount of code relying on the original behavior – code that was previously working in production will start throwing NullReferenceException.

To preserve the original MVC 1 model binder behaviour, consider creating a default model binder such as the following:

public class MVC1ModelBinder : DefaultModelBinder
{
    public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        bindingContext.ModelMetadata.ConvertEmptyStringToNull = false;
        return base.BindModel(controllerContext, bindingContext);
    }
}

and in global.asax.cs, make sure you have the following:

protected void Application_Start()
{
    ModelBinders.Binders.DefaultBinder = new MVC1ModelBinder();
}

ASP.NET MVC – issues with binding a non-sequential list with the default model binder


For the ASP.NET MVC default model binder to bind a list successfully, the list must be sequential with unbroken indices.

For the following examples, the server side model will be

public class ItemsModel 
{
    public IList<ItemModel> Item { get; set; }
}

public class ItemModel
{
    public string Name { get; set; }
}

Form submissions with non-sequential indexes, or indexes which don't start from zero (i.e. Item[2], Item[3], etc.), will result in incomplete form data being loaded by the model binder.

The following will not bind, and will result in the model being NULL:

<input type="text" name="Item[1].Name" value="Item1" />
<input type="text" name="Item[3].Name" value="Item2" />
<input type="text" name="Item[4].Name" value="Item3" />

The following will only partially bind, and will result in the model containing items 0, 1, 2:

<input type="text" name="Item[0].Name" value="Item0" />
<input type="text" name="Item[1].Name" value="Item1" />
<input type="text" name="Item[2].Name" value="Item2" />
<input type="text" name="Item[4].Name" value="Item3" />

whereas

<input type="text" name="Item[0].Name" value="Item0" />
<input type="text" name="Item[1].Name" value="Item1" />
<input type="text" name="Item[2].Name" value="Item2" />
<input type="text" name="Item[3].Name" value="Item3" />

will bind since the list is sequential.

Database version management and figuring out which scripts need to be run when deploying the latest version of your web app


When you are maintaining multiple web apps across environments, it can be difficult to keep track of which scripts need to be run to upgrade the database when it comes time to deploy. If you're maintaining different versions of web apps across environments, where the versions can sometimes be significantly out of sync, the difficulty of determining which update scripts need to be run on deployment can explode.

While there are a ton of approaches to keeping your database under version control, if you want something simple and effective that you can implement with minimal time and effort, consider a DatabaseVersionHistory table in your database.

A database version history table will allow you to see at a glance the state the database is in, and by comparing it with the update scripts in source control, you will quickly be able to determine which scripts need to be run.

The folder structure

All of your change scripts must be under version control for this to work successfully and they must be named sequentially. Consider the folder structure:

App Root
	\ SQL-Scripts
		\ 1.0.0
			\ 01 - Create-Initial-Database.sql
			\ 02 - Create-Database-Version-History-Table.sql
			\ 03 - Add-Users-Table.sql

		\ 1.1.0
			\ 01 - Add-Timesheet-Tables.sql

etc

Under the SQL-Scripts folder, you have folders 1.0.0 and 1.1.0. These should correspond to your application version, and whenever it changes, you should create a new SQL-Scripts folder for any new scripts. If you don't currently have a good system for versioning your application, consider Semantic Versioning.

The Database Version History table

Consider the following for your DatabaseVersionHistory table:

CREATE TABLE [dbo].DatabaseVersionHistory(
   [Id] [int] IDENTITY(1,1) NOT NULL,
   [MajorReleaseNumber] integer NOT NULL,
   [MinorReleaseNumber] integer NOT NULL,
   [PatchReleaseNumber] integer NOT NULL,
   [ScriptName] [varchar](255) NOT NULL,
   [DateApplied] [datetime] NOT NULL,

    CONSTRAINT [PK_SchemaChangeLog]
        PRIMARY KEY CLUSTERED ([Id] ASC)
)

INSERT INTO DatabaseVersionHistory
       ([MajorReleaseNumber]
       ,[MinorReleaseNumber]
       ,[PatchReleaseNumber]
       ,[ScriptName]
       ,[DateApplied])
VALUES (1, 0, 0, '02 - Create-Database-Version-History-Table.sql', GETDATE())

The MajorReleaseNumber, MinorReleaseNumber, PatchReleaseNumber correspond to the conventions in Semantic Versioning i.e. 1.2.3 has a major release number of 1, a minor release number of 2, and a patch release number of 3.

Note the INSERT INTO after creating the table. This is how we track which scripts have been run. Every script should contain an INSERT INTO following the same convention.

For example, the 01 - Add-Timesheet-Tables.sql script would contain:

CREATE TABLE [dbo].Timesheet(
 ...etc...
)

INSERT INTO DatabaseVersionHistory
       ([MajorReleaseNumber]
       ,[MinorReleaseNumber]
       ,[PatchReleaseNumber]
       ,[ScriptName]
       ,[DateApplied])
VALUES (1, 1, 0, '01 - Add-Timesheet-Tables.sql', GETDATE())

Ok. It’s time to deploy a new version of our web app. Which database scripts do we need to run?

Execute a query such as:

SELECT *
FROM DatabaseVersionHistory
ORDER BY MajorReleaseNumber, MinorReleaseNumber, PatchReleaseNumber, ScriptName

This will give you a list of the scripts that have already been applied to the database. Based on the results of this query and comparing them to the scripts you have in source control, you will know exactly the state the database is in.

Where to next?

While this system is very quick and easy to get into place, there are many things that can be done to make the system more robust.

For example, all update scripts could start with a query to determine whether the script has already been executed. This could be done by querying DatabaseVersionHistory for a match and stopping execution if a match is found, as sketched below. The system could then potentially be automated to execute all scripts as part of a deployment script.
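
A minimal sketch of such a guard, assuming the DatabaseVersionHistory table above (the exact abort mechanism, RAISERROR plus RETURN here, may need adapting to how your scripts are executed):

-- Hypothetical guard at the top of '01 - Add-Timesheet-Tables.sql'
IF EXISTS (SELECT 1 FROM DatabaseVersionHistory
           WHERE MajorReleaseNumber = 1
             AND MinorReleaseNumber = 1
             AND PatchReleaseNumber = 0
             AND ScriptName = '01 - Add-Timesheet-Tables.sql')
BEGIN
    RAISERROR('Script has already been applied; skipping.', 10, 1)
    RETURN
END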

A JIRA issue tracking FAQ for a small team

This post is intended as a living document that will evolve and grow over time. If there’s something you think I missed or would like something clarified, please feel free to leave a comment.

Who should be reading this FAQ?

Managers, developers, testers: anybody working on or contributing to a software project.

This article is generally JIRA-specific, however the concepts will carry across into other issue trackers such as Bugzilla, Redmine, Team Foundation Server (TFS) and Trac just fine.

When should a small team be using an issue tracker such as JIRA?

A young startup may initially get away just fine working informally and by email, and early in the project you'll want to have as little administrative burden as possible. However, as a project and team matures, there will come a time when having a searchable, persistent audit history of business decisions, fixes (and why they were made) and completed tasks becomes invaluable.

This audit history goes hand-in-hand with a good version control commit history.

Can’t I just email the person?

Sometimes. However, if you email somebody the details of an issue and the issue ends up in JIRA, make sure you add all the relevant details to the JIRA issue too; JIRA issues should be complete and not refer to emails.

If you're copying and pasting from an email and intend to quote somebody, you can use the {quote} text effect.

General guidelines for deciding on the Issue Type

I recommend creating a new issue type of Observation. In the context of creating an issue, an observation is something you noticed in the system which you're not sure is a bug or not.

Unless you know it’s a genuine bug, Observation is a good Issue Type default. The issue type can always be changed later after the issue has been reviewed.

Anything administration related is usually a Task, any non-bug refinements can usually be classified as Improvements, and any upcoming features can usually be classified as New Features.

What to do when two people need to work on different parts of an issue

When an issue can be separated cleanly into more than one unit of work, create one parent issue, then create sub-tasks as more granular pieces of work which can be assigned to individual users.

In the case where the work can't be separated cleanly, or it's not obvious how to separate the work into sub-tasks, create an issue as you would normally and assign it to the person who is likely to be the best one to start the investigation.


What to do when you’re finished with an issue

After you've fixed the issue assigned to you, click the Resolve button on the issue page, set the resolution to Fixed, and leave a comment describing in general what was done.

When the fix has been deployed, ensure the issue is marked as Fixed with an appropriate comment and assign it back to the user who initially created the issue.

If the issue won't be fixed for whatever reason, leave a comment describing why, mark the issue as Won't Fix and assign it back to the user who initially created the issue. The same applies for the resolution types of Cannot Reproduce and Incomplete.

What to do when an issue is no longer an issue

Then it should be closed. This is usually done by the creator of the issue or the test team.

Why are we doing all this administrative work instead of just working on the code?

Six months or more down the track, when you're looking at some part of the functionality and wondering why something was done, or the system is behaving in a certain way which isn't understood by anybody on the team, an audit trail will come in very handy.

An audit history is not a panacea; however, it will help immensely with the question of "why did we do that?"

There are several scenarios where this audit history will be highly valuable:

  • The most common scenario is when there has been a lot of code and business rule churn with changing requirements and shifting priorities, where some features may have been partially implemented or pivoted partway through development.
  • When the people who wrote the initial code have left the project. You can’t necessarily ask them why something was done.
  • When people have simply forgotten!

As a side note, version control commits also represent an audit history of the changes at a more technical level, and should include a reference to the JIRA issue when appropriate.

In summary, the JIRA audit history should consist of a searchable history of approvals, business decisions and implementation details. It should answer your questions of why something was done.

The Visual Studio 2012 Open File Dialog Doesn’t Work


After installing Visual Studio 2012, I found that the open file dialog wasn't being displayed. I could open projects via Windows Explorer or the recent projects menu, compile, run, etc, however neither Open Project nor File -> Open -> Project/Solution was working.

What was strange about this is that other dialogs such as New Project were working fine.

After much searching and testing, enabling the Tablet PC Input Service fixed the issue. This doesn't make much sense since I'm not using a tablet PC, however it works for me and may work for you.

To enable the service:

  1. Navigate to Computer Management -> Services and Applications (or run services.msc)
  2. Find the Tablet PC Input Service in the list, right click on it and select Properties from the menu.
  3. In the properties dialog, set Startup Type to Automatic and click the Start button.

Some more relevant details are here: http://superuser.com/questions/174854/some-dialogs-do-not-work-anymore-in-windows-7

Using Mercurial with a SVN repository in a production environment without any drama


Why would I want to use Mercurial or any other DVCS client with a Subversion repository?

  • It lets us keep SVN as our central repository
  • Some team members prefer not to use a DVCS for whatever reason, so it lets them carry on using SVN without interruption.
  • It allows me to work and commit changes (but not push!), search history and switch between branches completely disconnected. I can continue to work during network outages or while traveling when I don’t have connectivity.
  • You get full, fast history search.
  • Switching between branches is easy and fast.
  • Any automated processes which use SVN (e.g. automated builds and deployments) can continue to operate while everyone moves to DVCS.
  • It's much easier to perform merges than with regular SVN (via export/import patch queues, which I detail later)

Why not use git-svn?

I personally prefer Mercurial with hgsubversion since, in my mind, the tooling on Windows is currently much more mature, and I already have an hgsubversion workflow which is simple, robust and effective.

That being said, many people are happy with using git-svn and if you’re evaluating options, it may be worth giving it a go too!

General Overview

My local Mercurial repo contains the full history of the project with a full graph of branches. I use it to search the full history, switch between branches, export/import patches, commit and push. I keep everything in a single C:\Source\my-project folder; no need for \trunk or \branches folders. There are many ways to use Mercurial with a SVN repository, and each has its own caveats and edge cases. This article is intended to record a way which I have found to be robust and hassle-free.

I use TortoiseHg which you can get from http://tortoisehg.bitbucket.org/

TortoiseHg includes the command line client, so you don’t need to install it separately.

This guide assumes knowledge of Mercurial and SVN concepts and workflows, and is intended as a summary of things to keep in mind when using Mercurial to work with a SVN repository.

To interact with the SVN repository, I only need to use TortoiseHg's Workbench. Generally I'll keep the window open at all times on my second monitor, minimizing it when not needed.

More reading: http://tortoisehg.bitbucket.org/manual/2.3/workbench.html

Getting hgsubversion

You can get it from https://bitbucket.org/durin42/hgsubversion/overview/

This guide assumes that you will clone it to: C:\Apps\Mercurial\hgsubversion

I'm using hgsubversion at revision #821 (f28e0f54a6ef), so if in doubt, or the latest doesn't seem to be working for you, try updating to this specific revision.

More reading: http://mercurial.selenic.com/wiki/HgSubversion

Configuration

My mercurial.ini file in C:\Users\Matt looks like:

[extensions]
hgsubversion = C:\Apps\Mercurial\hgsubversion\hgsubversion
mq =
rebase =
hgext.bookmarks =
hgext.graphlog =
mercurial_keyring=

[ui]
username = mattbutton

[tortoisehg]
ui.language = en

[diff]
git = True

My hgrc file in C:\Source\my-project\.hg looks like:

[paths]
default = svn+https://path-to-my-project-repository

[tortoisehg]
postpull = update
autoresolve = False
closeci = True

[ui]
username = mattbutton

My .hgignore in C:\Source\my-project for ASP.NET MVC development in Visual Studio on Windows looks like:

syntax: glob

obj
[Bb]in
_Resharper.*
*.csproj.user
*.resharper.user
*.resharper
*.suo
*.cache
*~
*.swp
*.db
build
GlobalAssemblyInfo.cs
*.sqlite
# Ignore thumbnails created by Windows
Thumbs.db
# Ignore files built by Visual Studio
*.obj
*.exe
*.pdb
*.user
*.aps
*.pch
*.vspscc
*_i.c
*_p.c
*.ncb
*.suo
*.tlb
*.tlh
*.bak
*.cache
*.ilk
*.log
*.dbmdl
[Bb]in
[Dd]ebug*/
*.lib
*.sbr
obj/
bin/[Rr]elease*/
_ReSharper*/
[Tt]est[Rr]esult*
*.ReSharper

Cloning the Repository

It's best to clone the entire repository, not just trunk. This way you can take advantage of switching between branches and full history search.

If you’re cloning a large repository with thousands of changesets, you can expect the initial clone to take a few hours.

I recommend that you zip up the repository after the initial clone and keep it as a backup in case something happens to the working repository. This way, you can extract it somewhere and pull without having to go through the time consuming initial clone of the SVN repository.

Import/Export Patch

It's important that git-format patches are enabled. Add the following to your mercurial.ini:

[diff]
git = True

Without this setting, if you add a new file, commit, then create a patch based on the commit, you’ll discover that the new file is not included in the patch. Enabling git diffs will avoid this problem altogether.
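
For example (hypothetical file name), the difference shows up in a workflow like:

hg add NewFile.cs
hg commit -m "Add NewFile.cs"
hg export -o new-file.patch tip

With git = True, new-file.patch includes the added file; without it, as noted above, the new file would be missing from the patch.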

More reading: http://mercurial.selenic.com/wiki/GitExtendedDiffFormat

Branching and Merging

Never use Mercurial merges when dealing with a SVN repository. SVN only accepts a linear history, so hgsubversion cannot push merge changesets to a SVN repository, and you'll only end up with errors if you attempt this. There are two ways to get the same result without a merge.

If you're working off the trunk and you want to push your new changes, use the rebase function and deal with any merge conflicts. This will take all of the changesets which you haven't yet pushed and append them to the SVN head. You'll then be able to push a linear history.


The general command line workflow for this is:

hg pull
hg rebase --svn
hg push

If you want to merge changes from one branch to trunk or vice-versa, use the export/import patch functionality.

More reading: http://blog.kalleberg.org/post/2337246985/merging-a-mercurial-repository-back-into-subversion

Removing Unversioned Files

When switching between branches, you may end up with files which don’t belong in the revision which you’ve switched to. This may cause problems in your build process. To remove any unversioned files, you can use the ‘purge’ extension:

hg purge --all

When Things Go Wrong and You Can’t Push

Sometimes you'll have issues pushing to the SVN repository. If your changeset can't be pushed and you're getting an odd error which isn't the usual change conflict, a workflow to fix this is (sketched as commands after the list):

  1. Pull the latest revisions
  2. Export the changesets which aren’t pushing as a series of patches
  3. Strip the changesets which aren’t pushing
  4. Import the patches onto the head
  5. Finalize the MQ
  6. Push.
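
A sketch of those steps as commands, where REV1 and REV2 stand in for the changesets which aren't pushing (oldest first; this assumes the mq extension is enabled, as in the configuration above):

hg pull
hg export -o "%n.patch" REV1 REV2
hg strip REV1
hg qimport 1.patch 2.patch
hg qpush -a
hg qfinish -a
hg push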

Strip will remove the changeset and all of its descendants.

NB: Strip rewrites history so you should only use it on changesets which haven’t been pushed. You should never attempt to strip a changeset which has been pushed to SVN.

There are other methods such as hg collapse, however I see these as being quite risky and error prone since you're making destructive changes to history. The export/import patch method has been reliable and problem-free for me.

More reading:
http://mercurial.selenic.com/wiki/Strip
http://mercurial.selenic.com/wiki/CollapseExtension

SQL Profiler templates missing


If you are connecting to a SQL Server instance with SQL Profiler and none of your templates are showing up, compare the version of the SQL Profiler you are running with the version of the SQL Server you're connecting to; there is likely a version mismatch.

If this is the case, what’s likely happening here is that you’re connecting to a SQL 10.50 instance with a SQL 10.0 profiler and the profile templates for 10.50 aren’t present.

In the case of the profiler from SQL 2008 connecting to a SQL 2008 R2 instance, copy your 100 profile templates folder (default install is at C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Profiler\Templates\Microsoft SQL Server\100) into a new folder in the same location with the name “1050” i.e. C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Profiler\Templates\Microsoft SQL Server\1050.

Then try to reconnect, and you’ll have access to the profile templates and everything will work fine.

More information about SQL versions can be found at: http://sqlserverbuilds.blogspot.com/

Loading jQuery via HTTP or HTTPS depending on the request protocol without document.write


When running a page over HTTPS, you'll also want to load any external resources, such as JavaScript, via HTTPS. A lot of people recommend loading jQuery from the Google CDN via the following JavaScript:

<script type="text/javascript">
    var gaJsHost = (("https:" == document.location.protocol) ? "https://" : "http://");
    document.write(unescape("%3Cscript src='" + gaJsHost + "ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js' type='text/javascript'%3E%3C/script%3E"));
    document.write(unescape("%3Cscript src='" + gaJsHost + "ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js' type='text/javascript'%3E%3C/script%3E"));
    document.write(unescape("%3Cscript src='" + gaJsHost + "ajax.googleapis.com/ajax/libs/swfobject/2.1/swfobject.js' type='text/javascript'%3E%3C/script%3E"));       
</script>

This works just fine, however you can let the browser select the protocol to match the page's by using the following snippet:

<script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
<script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jqueryui/1.7.2/jquery-ui.min.js"></script>
<script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/swfobject/2.1/swfobject.js"></script>

This is obviously much cleaner, and you can see the full URL rather than JavaScript code that builds up a string. The key here is the double slash at the start of the src attribute, which makes it a protocol-relative URL. This kind of URL works for any web resource and is particularly useful for loading resources from the Google CDN.
