Working with the image module for PowerShell; part 3, GPS and other data

In Part 1 I showed how my downloadable PowerShell module can tag photos using related data – like GPS position – which was logged as they were being taken, and in Part 2 I showed how I’d extended the module in James Brundage’s PowerPack for Windows 7. Now I want to explain the extensions which automate the processes of:

  • Getting the data logged by GPS units and similar devices
  • Reading each image file from the memory card and matching it to an entry in the log made at around the same time
  • Building up the set of EXIF filters based on the log entry.

The data and pictures are connected by the time stamp on each, but to connect them properly the scripts must cope with any difference between the camera’s clock and the time on the logging device – whether that’s a GPS unit or my wrist-mounted scuba computer. A few seconds won’t introduce much error, but the devices might be in different time zones – for example GPS works on Universal Time (GMT) – so the offset is often hours, not seconds. My quick and dirty way of making a note of the difference is to photograph whatever is doing the logging (assuming it can display its time). The camera will record the time its own clock was set to in the EXIF “Date and time taken” field, and subtracting that from the time displayed on the logger in the picture gives an offset to apply to all data points. The following is the core of a function named Set-Offset which could be seen in Part 1:

$RefDate = ([datetime]( Read-Host ("Please enter the Date & time " +
                                   "in the reference picture, formatted as" + [char]13 +
                        [Char]10 + "Either MM/DD/yyyy HH:MM:SS ±Z or " +
                                   "dd MMMM yyyy HH:mm:ss ±Z"))
            ).ToUniversalTime()

$ReferenceImagePath = Read-Host "Please enter the path to the picture"
if ($ReferenceImagePath -and (Test-Path $ReferenceImagePath) -and $RefDate) {
     $picTime = (Get-Exif -image $ReferenceImagePath).DateTaken
     $Global:offset = ($picTime - $RefDate).TotalSeconds
}

The real Set-Offset can take -RefDate and -ReferenceImagePath parameters so the user doesn’t need to be prompted for them. Most of the code you can see is concerned with getting the user to enter the time (in a format that PowerShell can use) and the path to the file. The only part which uses the image module is
  (Get-Exif -image $ReferenceImagePath).DateTaken
Get-Exif is a command I added, and it returns an object which contains all the interesting EXIF data from the image file. Only the value in the DateTaken property is of interest here; it is used to calculate the number of seconds between the camera time and the logger time, and the result is stored in a global variable named $offset.
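If the reference shot and the logger time are already known, the prompts can be skipped. Something like the following should work, assuming the parameters behave as described above – the path and the date here are just placeholders, not real files or times:

  # Hypothetical values: -RefDate is the time shown on the logger in the reference picture
  Set-Offset -ReferenceImagePath "E:\DCIM\100PENTX\IMG0001.JPG" `
             -RefDate ([datetime]"04/25/2010 11:55:03")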

The next step is to read the data and apply the offset to it; depending on how it was logged, this will be one of:
  $Points = Get-NMEAData   -Path $Logpath -offset $offset
or
  $Points = Get-GPXData    -Path $Logpath -offset $offset
or
  $Points = Get-CSVGPSData -Path $Logpath -offset $offset
or
  $Points = Get-SuuntoData -Path $Logpath -offset $offset

The last one handles the comma-separated data exported from the Suunto Dive Manager program, which downloads the data from my dive watch. The other three deal with different formats of GPS data: it may be in the form of NMEA sentences (comma-separated again), the CSV format used by Efficasoft GPS Utilities on my phone, or the XML-based GPX format. (GPS data formats are worth another post of their own.) You may need to make slight alterations to these functions to work with your own logger, but they are easy to change. All of them except Get-GPXData import from a CSV file – and use a feature which is new in PowerShell V2 to specify the CSV column headings when using the Import-Csv command. Get-GPXData uses XML documents, looking for a hierarchy which goes <gpx><trk><trkseg><trkpt><trkpt><trkpt>… All the functions use Select-Object to remove fields which aren’t needed and insert calculated data (for example converting the native speed in knots from GPS to MPH and km/h).
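To give a feel for that pattern, here is a minimal sketch – not the module’s actual code – assuming a headerless log with date/time, latitude, longitude and speed-in-knots columns. The column names and the conversion factors here are mine, not the module’s:

  # Minimal sketch only: name the columns of a headerless CSV log, apply the
  # clock offset, and add calculated speed fields with Select-Object
  $Points = Import-Csv -Path $Logpath -Header "DateTime","Lat","Lon","Knots" |
      Select-Object @{Name="DateTime"; Expression={([datetime]$_.DateTime).AddSeconds($offset)}},
                    Lat, Lon,
                    @{Name="MPH"; Expression={[double]$_.Knots * 1.15078}},
                    @{Name="KMH"; Expression={[double]$_.Knots * 1.852}}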

After running one of these commands there will be a collection of data points stored in the variable $points. Each data point has a time, adjusted by the offset value, so it is the time as the camera would have seen it. The Suunto dive computer points have a Description (the name of the dive site and water temperature) and depth, while the GPS points have speed (GPS works in knots and the script calculates miles per hour and kilometres per hour), bearing, latitude as degrees, minutes, seconds, North or South, longitude as degrees, minutes, seconds, East or West, latitude and longitude in their original form from the logger, and altitude in both metres and feet. (NMEA data needs extra processing to get the altitude data, and Get-NMEAData has a -NoAltitude switch to allow processing to be speeded up if only latitude and longitude are needed.)
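To give an idea of the shape of one of these objects, here is a GPS point built by hand with properties like those just described. The values are invented, and only the names used in the later examples – DateTime, LatDMS, NS, LonDMS, EW and AltM – are known to match the module’s; the rest are indicative:

  # Illustrative only - a hand-built point with made-up values
  New-Object PSObject -Property @{
      DateTime = [datetime]"04/25/2010 12:03:10"   # already adjusted by $offset
      LatDMS   = 51,45,7   ; NS = "N"              # latitude as degrees, minutes, seconds
      LonDMS   = 1,15,28   ; EW = "W"              # longitude as degrees, minutes, seconds
      Lat      = 51.7519   ; Lon = -1.2578         # values as logged
      MPH      = 2.9       ; KMH = 4.7 ; Bearing = 270
      AltM     = 64        ; AltFt = 210
  }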

Armed with a collection of points, the next step is to find the one nearest to the time the picture was taken; a function named Get-NearestPoint does this. Given the time stamped on the photo, the function returns the data point logged closest to that time. It isn’t very sophisticated: it takes three parameters – a time, a time-sorted array of points and the name of the field on the data points to check for the time – and works through the points until the point being looked at is further away from the target time than the previous one. The core of the function looks like this:

   # Distance (in seconds) between the first point and the target time
   $variance = [math]::Abs(($dataPoints[0].$columnName - $MatchingTime).totalseconds)
   $i = 1
   do {
        $v = [math]::Abs(($dataPoints[$i].$columnName - $MatchingTime).totalseconds)
        # While each point is at least as close as the last, keep moving forward
        if ($v -le $variance) {$i ++ ; $variance = $v }
      } while (($v -eq $variance) -and ($i -lt $datapoints.count))
   # The loop stops one past the best match, so return the previous point
   $datapoints[($i -1)]

In use it looks something like this.

$image = Get-Image        -Path "MyPicture.Jpg"
$dt    = Get-ExifItem     -image $image  -ExifID $ExifIDDateTimeTaken
$point = Get-NearestPoint -Data  $points -Column "DateTime" -MatchingTime $dt

$point contains the data used to set the EXIF properties of the picture, a process which requires a series of EXIF filters to be created – I explained EXIF filters in Part 2. As well as data retrieved from a log, there are times when I want to tag a picture manually. For example, I took some photos in London’s Trafalgar Square without a GPS logger that I want to tag with 51°30’30” N, 0°7’40” W. To make this easier I created a function named Convert-GPStoEXIFFilter which can be invoked like this:

$filter = Convert-GPStoEXIFFilter 51,30,30 "N" 0,7,40 "W"

If you’re not used to PowerShell I should say that in other languages 51,30,30 would be the way to write three parameters. In PowerShell it is one array parameter with three members. (Even old hands at PowerShell occasionally get confused and put in a comma which turns two parameters into a single array parameter.) I could have explicitly named the parameters and made it clear that these three values were an array by writing
 -LatDMS @(51,30,30) -NS "N"
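If that still seems odd, this throwaway function – purely for demonstration, not part of the module – shows how the comma changes what the parameters receive:

  # Demonstration only: a comma turns two arguments into one array argument
  function Show-Binding { param($First, $Second) "First = $($First -join '+') ; Second = $Second" }
  Show-Binding 51 30        # two parameters : First = 51    ; Second = 30
  Show-Binding 51,30 "N"    # array + string : First = 51+30 ; Second = N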

Convert-GPStoEXIFFilter returns a chain of up to seven EXIF filters for the GPS version, the latitude reference (North or South), the longitude reference (East or West), the altitude reference (above or below sea level), the latitude and longitude (as degrees, minutes, seconds and decimals) and the altitude in metres (altitude is optional). If $point holds the data logged at the time the picture was taken, Convert-GPStoEXIFFilter can be invoked like this:

$filter = Convert-GPStoEXIFFilter -LatDMS $point.LatDMS -NS $point.NS `
                                  -LonDMS $point.LonDMS -EW $point.EW -AltM $point.AltM

At the end of Part 2 I showed the Copy-Image command, which handles renaming, rotating, and setting the keywords and title EXIF fields, and mentioned it could be handed a set of filters. All the parameters that Copy-Image uses are available to Copy-GPSImage, which takes the set of points as well. Internally it performs the $image=, $dt=, $point= and $filter= commands seen above before calling Copy-Image with the image, the filter chain and the other parameters it was passed. The full set of parameters for Copy-GPSImage is as follows:

  • Image – The image to work on; this can be an image object, a file object or a file name, and can come from the pipeline.
  • Points – The array of GPS data points from Get-NMEAData, Get-GPXData or Get-CSVGPSData.
  • Keywords – Keywords to go into the EXIF Keyword Tags field.
  • Title – Text to go into the EXIF Title field.
  • Rotate – If specified, adds whatever rotate filter is indicated by the EXIF Orientation field.
  • NoClobber – The conventional PowerShell switch to say “Don’t overwrite the file if it already exists”.
  • Destination – The FOLDER to which the file should be saved.
  • Replace – Two values separated by a comma specifying a replacement in the file NAME.
  • ReturnInfo – If specified, returns the point(s) matched with the pictures.

So now it is possible to use three commands to geotag the images. The first two get the time offset and get the data points, applying that offset in the process:

set-offset "D:\dcim \100Pentx\IMG43272.JPG" –Verbose $points= Get-CSVGPSData 'F:\My Documents\My GPS\Track Log\20100425115503.log' ‑offset $offset

and the third gets the files on a memory card and pushes them into Copy-GPSImage:

$photoPoints = Dir E:\dcim -include *.jpg -recurse |
               Copy-GpsImage -Points $Points -verbose `
                             -DestPath "C:\users\jamesone\pictures\oxford" `
                             -Keywords "Oxfordshire" -replace "IMG","OX-" -returnInfo

This is much as it appeared in Part 1, although the third command has changed slightly. Copy-GPSImage now has a -ReturnInfo switch which returns the points where a photo was taken; to link each point to the image file(s) which matched it, an extra property, Paths, is added to the points.

I mention this because I wanted to show the functions I added almost for fun at the end. Out-MapPoint and ConvertTo-GPX got brief mentions in Part 1: with the data in $photopoints I can push camera symbols through to a map like this (note the sort -Unique to remove duplicate points; 79 is the camera symbol):
  $photopoints | sort dateTime -Unique | Out-MapPoint -symbol {79}

Alternatively I can create a GPX file which can be imported into MapPoint, Google Earth and lots of other tools. GPX files need to be UTF-8 text, but PowerShell wants to write output files as Unicode (UTF-16) – thwarting it isn’t hard, but it is ugly.
  $photopoints | sort dateTime -Unique | convertto-gpx | out-file photoPoints.gpx -Encoding utf8
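The utf8 encoding above is usually enough, but in Windows PowerShell it writes a byte-order mark, which some GPX consumers dislike. If that bites, one ugly-but-workable alternative (not part of the module, just a sketch) is to drop down to .NET:

  # Sketch: write BOM-less UTF-8 via .NET if a consumer objects to the byte-order mark
  $gpxText = $photopoints | sort dateTime -Unique | convertto-gpx | Out-String
  [System.IO.File]::WriteAllText("$pwd\photoPoints.gpx", $gpxText, (New-Object System.Text.UTF8Encoding $false))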

With the photo points logged it would be nice to show the path I walked, but that would have too many points, so I wrote Merge-GPSPoints, which combines all the points for each minute. That lets me do
  Merge-GPSPoints $points | Out-MapPoint
or
  Merge-GPSPoints $points | convertto-gpx | out-file WalkPoints.gpx -Encoding utf8
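Since Merge-GPSPoints combines the points for each minute, the idea is essentially a group-and-average job. A much-simplified sketch of that idea – not the module’s actual implementation, and assuming decimal Lat and Lon properties – would be:

  # Simplified sketch of per-minute merging - not the real Merge-GPSPoints
  $points | Group-Object {$_.DateTime.ToString("yyyy-MM-dd HH:mm")} | ForEach-Object {
      New-Object PSObject -Property @{
          DateTime = $_.Group[0].DateTime
          Lat      = ($_.Group | Measure-Object Lat -Average).Average
          Lon      = ($_.Group | Measure-Object Lon -Average).Average
      }
  }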

One thing I should point out here is that the GPX format I convert to is a series of waypoints (i.e. places that will be navigated to in future), not track points (places which have been visited in the past). The import routine processes the latter.

The last detail of the module for now is that I also gave it a function to find out where an image was taken, like this:

PS > Resolve-ImagePlace 'C:\users\Jamesone\Pictures\Oxford\OX-43624.JPG'
Summertown, Oxford, Oxford, Oxfordshire, England, United Kingdom, Europe, Earth

That’s not a data error when it says Oxford, Oxford. The Geoplaces web service I use returns

ToponymName                                           Name            fcode  Description for fcode
Earth                                                 Earth           AREA   A tract of land without homogeneous character or boundaries
Europe                                                Europe          CONT   Continent
United Kingdom of Great Britain and Northern Ireland  United Kingdom  PCLI   Independent political entity
England                                               England         ADM1   First-order administrative division (US states, England, Scotland etc.)
County of Oxfordshire                                 Oxfordshire     ADM2   A sub-division of an ADM1 (counties in the UK)
Oxford District                                       Oxford          ADM3   A sub-division of an ADM2 (district-level councils in the UK)
Oxford                                                Oxford          PPL    Populated place (cities, towns, villages)
Summertown                                            Summertown      PPLX   Section of populated place

I haven’t done much to introduce intelligence into processing this. I used Trafalgar Square in Part 2, and that returns Charing Cross, London, City of Westminster, Greater London, England, United Kingdom, Europe, Earth – which is correct but difficult to allow for. To make matters worse, all sorts of strange geo-political questions come up if you say the UK is the country and England is the topmost administrative division: English people might well think counties are the tier below parliament administratively, but since the Scottish Parliament and Welsh Assembly opened, you might find a different view if you step over the border. Software which works to the American model of displaying the populated place and first admin division – for example Seattle, Washington – is easily thrown: given Reading, Berkshire it gives Reading, England. Those are questions to look at another time.