News, Blogs, and Tips

Coin Flip

Nick Hodges - Sun, 05/30/2010 - 18:28

I had occasion to write a little routine called CoinFlip:

function CoinFlip: Boolean;
begin
  Result := Random > 0.5;
end;

I don’t know why, but I found it mildly amusing.  And I bet someone will tell me that it is slightly biased in one direction.  Because it is – Random returns a value in the range [0, 1), so the two outcomes aren’t quite equally likely.  Anyway, I thought you all might enjoy it, too.
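If the bias ever mattered, one minimal sketch of a fix would use the integer overload of Random, which returns a uniformly distributed whole number in 0..Range-1 (FairCoinFlip is a name made up for this example):

```pascal
// Unbiased variant: Random(2) yields 0 or 1 with equal
// probability, sidestepping the floating-point comparison.
function FairCoinFlip: Boolean;
begin
  Result := Random(2) = 0;
end;
```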

Categories: News, Blogs, and Tips

Random Thoughts on the Passing Scene #158

Nick Hodges - Wed, 05/26/2010 - 09:01
Categories: News, Blogs, and Tips

Random Thoughts on the Passing Scene #157

Nick Hodges - Mon, 05/24/2010 - 12:57
Categories: News, Blogs, and Tips

Random Thoughts on the Passing Scene #156

Nick Hodges - Wed, 05/12/2010 - 14:54
  • “The board "highly recommended" switching to Pascal/Delphi because it is stable and was designed to teach programming and problem solving.”
  • Cary Jensen has an interesting survey up on his blog, asking how much you use the “non-core” features of RAD Studio.  It’s an interesting question. He’s asking particularly about the unit testing, audit, metrics, and design patterns features in the product – some unheralded features, particularly the audits and metrics.  Cary’s a little inaccurate on one point: while the Together product did go with Borland, the codebase that was Together for Visual Studio, and which was eventually adapted into the RAD Studio IDE, came with us to Embarcadero and continues to advance and evolve.  In fact, the modeling features and their companions – audits, metrics, and the code formatter – are all alive and quite well and being improved as we speak.  In any event, I’ll be curious to see the outcome.  Please do go and fill out the survey, and if you aren’t using those features, please give them a look.  There is a lot of power there.
  • I did a podcast with Jim McKeeth for The Delphi Podcast.  Unfortunately, it stopped recording in the middle, so I guess it will be a two-parter. 
  • I’ve been using Evernote more and more these days. Interesting service, and free for up to 40MB a month, which is far more than I’m using now.  I haven’t upgraded to their Premium service yet, but I can see that coming.  It’s really just a cloud app – I can make notes on the web or on my computer, and then view them anywhere: on the web, on my Android phone, wherever. 
Categories: News, Blogs, and Tips

Delphi Development Pretty Good Practices #4 – Do Work in Classes

Nick Hodges - Wed, 05/05/2010 - 09:29

The next principle for the “Pretty Good Practices” we’ll discuss is this notion:  Whenever possible and as much as possible, put functionality in a class –  preferably a class that can be easily unit tested, reused, and separated from any user interface.

TextScrubber demonstrates this via the use of the TTextScrubber class in the uTextScrubber.pas unit.  TTextScrubber  is a simple TObject descendant that does all the work for the whole application, really.  It is a standalone class – you could take the uTextScrubber.pas unit and use it in most any project you cared to.  Because of this, it is also very easy to write unit tests for this class.  (We covered unit testing in my previous series “Fun with Testing DateUtils.pas”, but I’ll discuss Unit Testing in a later post in this series as well.)  The class attempts to follow the “Law of Demeter”, which says that classes should know as little as possible about outside entities.  The three principles of the Law of Demeter are as follows:

  • Each class should have only limited or hopefully no knowledge of other classes.
  • If a class must have knowledge of other classes, it should only have connections to classes that know about it as well.
  • Classes should never “reach through” one class to talk to a third class.

In the case of TTextScrubber, it only knows about and utilizes the TClipboard class and nothing else.  It doesn’t try to grab things out of TClipboard or attach to or require any other class.  It pretty much minds its own business, utilizes the services of the clipboard, and provides an easy way to get at its functionality.  It endeavors to do one thing:  scrub text, by both straightening and “un-formatting” it.  It has short, sweet method bodies, and ensures that it doesn’t try to do too much beyond exactly what it is supposed to do.  Following the Law of Demeter tends to make your code more maintainable and reusable. By reducing dependencies, you ensure that a class is as flexible as possible and that changes to it don’t tend to have far-reaching consequences. 
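To make the “reach through” idea concrete, here is a minimal sketch with hypothetical classes (TCustomer and TAddress are not part of TextScrubber; they are invented for this illustration):

```pascal
type
  // Hypothetical classes, purely for illustration.
  TAddress = class
  public
    City: string;
  end;

  TCustomer = class
  private
    FAddress: TAddress;
  public
    // Demeter-friendly: expose what callers need so they never
    // write Customer.Address.City ("reaching through" TCustomer
    // to talk to TAddress).
    function CityName: string;
  end;

function TCustomer.CityName: string;
begin
  Result := FAddress.City;
end;
```

A caller asks the customer directly (Customer.CityName) and stays entirely ignorant of TAddress, so changes to the address class don’t ripple outward.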

So, to as large a degree as possible, you should endeavor to put the functionality of your program into classes.  One way to tell you are not doing this is if you tend to do “OnClick” programming – that is, relying on event handlers to do the work of your application.  The Pretty Good Practices way of programming dictates that your event handlers contain code that merely instantiates and uses other classes, instead of holding the actual code that does the work of your application. 

So for instance, most of the work in TextScrubber gets done in an OnClick event of the TTrayIcon component.  That code looks like this:

procedure TStraightTextMainForm.MainTrayIconClick(Sender: TObject);
begin
  MainTrayIcon.Animate := True;
  case TextScrubberOptions.ClickChoice of
    ccStraightenText:
      begin
        DoStraightenText;
      end;
    ccScrubClipboard:
      begin
        DoPurifyText;
      end;
  end;
end;

It merely calls one of two methods, DoStraightenText or DoPurifyText, that scrub the text on the clipboard.  Those two methods look pretty much the same – they merely create a TTextScrubber, use it, and then free it.  DoStraightenText looks like this:

procedure TStraightTextMainForm.DoStraightenText;
var
  TS: TTextScrubber;
begin
  TS := TTextScrubber.Create(TextScrubberOptions.ShouldTrim);
  try
    TS.StraightenTextOnClipboard;
  finally
    TS.Free;
  end;
end;

This method is very simple and to the point — it utilizes the TTextScrubber class to do the work.  It’s not always entirely possible, but I try to make as many of my event handlers and methods as possible follow this pattern of merely utilizing the functionality of external classes.  Doing so enables a few things:

  • It means that functionality is much easier to unit test.  Isolated classes with specific functionality make unit testing really easy. 
  • Functionality is easier to share and reuse.  An isolated, decoupled class can easily be moved to new applications as it has few or no dependencies.
  • Lean event handlers mean that your user interface isn’t tightly coupled to the work code.  This means that adjusting or altering your UI is easier to do, and adjusting and altering the work code doesn’t mean a change in the way the UI works.

So, to sum up – always try to build standalone classes to do the work of your application. 

Categories: News, Blogs, and Tips

Random Thoughts on the Passing Scene #155

Nick Hodges - Fri, 04/30/2010 - 10:29
  • Delphi Prism 2011 is coming soon, and you can read more about it at SDTimes.  Or maybe you are more of an InfoWorld guy and want to read their article about it.
  • If you are a RemObjects customer, you might be interested in testing their Public Chat system.
  • As part of my “Pretty Good Practices” series, I talked just a bit about naming conventions.  One of the commenters pointed out this classic EDN article by Charlie Calvert that outlines a “Style Guide” for writing code.  I want to stress that I’m a firm believer that there is no one true way to style your code. I have a way I like to do it, but your way is great, too. The important thing is to have a set of rules and stick to them.
  • For everybody who reads our newsgroups via NNTP:  Make sure that your newsgroup reader is pointed to forums.embarcadero.com and not to a *.codegear.com address. 
  • The Delphi for PHP “look aheads” done by Jose continue apace.  There are a lot of cool things coming out of those videos.  Give them a look.
Categories: News, Blogs, and Tips

Delphi Development Pretty Good Practices #3 – File Setup

Nick Hodges - Wed, 04/28/2010 - 08:22

Okay, hopefully by now you’ve read the introduction of this series, and you’ve downloaded (or better yet, pulled from source control) the latest version of TextScrubber and given it a once over.  In this installment, I’m going to discuss the file structure of the project and why I did things the way I did.

First, an admin note:  I’ve made some small changes to the code for TextScrubber.  You can get these by going to the main directory for your code and typing svn up.  They aren’t any big deal – comments, some cleanup and organization – but it sure is easy to get the latest changes, isn’t it?

Taking a look at the file set that makes up the TextScrubber project, you see a number of different files, each with a particular purpose. I’ll describe each file in turn, telling what its purpose is and why it exists.  Future installments will go into a bit more depth.

frmAboutBox.pas
Every application should have an About Box.  The About Box should display the application icon, give version information and copyright notices, and contain a short description of what the application is or does.

frmStraightText.pas
This is the “main form” for the application, though TextScrubber doesn’t really have a main form in the visual sense; this form is simply a container for some non-visual controls.

frmTextScrubberOptions.pas
Every application should have a single dialog that allows the user to set all the configurable options and preferences for the application.

NixUtils.pas
A general-purpose file of handy utilities and routines that I’ve built up over the years.  I use it in TextScrubber, so I’ve included it in the project.  Normally, I keep this file in a separate directory.

uTextScrubber.pas
This is the “workhorse” unit that contains the class that does all the work of the application.

uTextScrubberConsts.pas
This file has one and only one purpose:  to hold all the constants for the project. It includes all constant declarations, as well as the strings declared with resourcestring.

uTextScrubberTypes.pas
This file contains all the type declarations for the project, including classes, enumerations, records, etc.

uTextScrubberUtils.pas
This file contains those little, standalone, “helper” routines that you use to build the product.  Routines that go into this unit are often considered as candidates to end up in NixUtils.pas, but most often they are very specific to the purposes of the project.

A Note About Naming Conventions

For this project, I’ve used a pretty simple naming convention. For filenames, I put ‘frm’ at the front of forms and ‘u’ at the beginning of standalone units; DataModules would get ‘dm’. Constants start with ‘c’, and resourcestrings start with ‘str’.  Parameters are prefixed with ‘a’, and all local variables (well, almost all) are prefaced with ‘Temp’.  Those latter two help keep things straight inside class methods.  I try to make my identifiers descriptive, and I never worry about their length.  A well-named identifier makes code clearer, and Code Completion can do all the work if you are worried about typing.  (But in my view, you should never worry about typing if typing less means writing unclear code.)
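Put together, a hypothetical fragment following those conventions might look something like this (none of these identifiers are from TextScrubber; they are invented to show the prefixes):

```pascal
const
  cMaxLineLength = 80;                    // constant: 'c' prefix

resourcestring
  strScrubComplete = 'Text scrubbed.';    // resourcestring: 'str' prefix

// Parameters get an 'a' prefix; locals are prefaced with 'Temp'.
procedure ScrubText(aRawText: string);
var
  TempResult: string;
begin
  TempResult := Trim(aRawText);
  // ... do something with TempResult ...
end;
```

Files holding forms would be named along the lines of frmMain.pas, standalone units uScrubber.pas, and data modules dmData.pas.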

I use those as a general set of rules, but I’m not dogmatic about it.  I try to be consistent for my and your benefit.  The exact rules of naming aren’t nearly as important as having a naming convention.  My recommendation is to find a system that you like and stick with it.  Naming conventions are a great source of religious debates. I generally leave that to others, and simply recommend that you find something that works for you and stick with it. 

More Detail to Come

I’ll be talking a bit more specifically about each of the files in future installments.  This entry should just give you a brief rundown on the basics.

Categories: News, Blogs, and Tips

Delphi Development Pretty Good Practices #2 – Source Control

Nick Hodges - Fri, 04/23/2010 - 13:35

Okay, so for this first installment, I’ll be illustrating one of my core principles for developing applications:  All my code of any importance at all goes under source control.  All of it.

I’m all by myself – why should I use source control?

We ask about source control use on the annual survey, and a surprisingly large percentage of you are not using source control at all.  That was a bit of a surprise.  If you are working on a team of any size, using source control is a no-brainer.  But even if you are working alone, using source control is a really good idea. 

Why, you might ask?  Well, there are a number of good reasons:

  1. It’s good to be in the habit.  Sure, you may be working alone.  But in the future you may not be.  Or your “weekend hobby project” might turn into a popular project with many developers.  If anything like that happens, being in the habit of using source code control will stand you in good stead.
  2. It protects your code.  Since your code is stored on a server apart from your development machine, you have a backup. And then, you can even back up the code on the server.  Sure, you can zip it all up any time you want, but you don’t get all the other benefits I’m listing here.
  3. It can save your butt.  Sometimes, you might accidentally delete something.  You might make mistakes and change code that you didn’t want changed.  You might start off on some crazy idea when you are feeling a bit saucy, and then regret it.  Source control can save you from all of these by making it a piece of cake to revert to any previous state.  It’s like a really powerful “undo” feature.
  4. It allows you to “turn back time”. Say you are a shareware author. You like to release updates and new versions.  And say you get a support request from a customer that has a bug while using a version that is two major releases old. Source control lets you easily recreate the code base for that exact release and debug the problem that the user is seeing.
  5. It makes you think about your process.  Even if you work alone, you should be deliberate and organized in how you write code. If you are in the habit of checking your code into a source control system, you’ll end up thinking more about what you are doing, how you are doing things, and you’ll end up being more organized and deliberate. 
  6. It gives you the freedom to experiment.  Somewhat the mirror image of the previous reason, source control gives you the freedom to say “What the heck, I’ll try that wacky way of doing things!”  Since you know that you can always get back to a known good state, you are free to experiment and try something that you might otherwise hesitate to do. And that experiment might just prove to be a brilliant way to do it. 
  7. It lets you backtrack.  Even when we work alone, we can’t remember every single thing we do and every single change we make.  And I bet at least once in your life you’ve looked at some code and said “Huh? When the heck did that happen?”  With a source control system, you can answer that question very easily.  You can track where a specific change came from and when it was made and maybe even the comment you made when you checked the change in.
  8. It lets you see what changed.  Sometimes, things start acting up. Maybe a section of your application that you haven’t used in a while is behaving differently than you expected.  Maybe it is totally broken and you have no idea why.  Source control can let you track the process and peer into the history of a specific chunk of code to see what changes were made and how those changes affected the project as a whole.

I’m sure you all can think of more reasons.  Bottom line is that if you aren’t using source control, then you should start, no matter what your development situation is.  ZIP files just aren’t going to cut it.  Seriously.

Okay, I’m convinced. What now?

I’m convinced that my points above are so compelling that you are in violent agreement with me, so I’m going to make you use the Subversion client to get the code for this series’ demo project, TextScrubber. TextScrubber is only available from SourceForge under Subversion.   If you don’t use Subversion for source control, you should at least have the command line client on your machine, because Subversion is everywhere, and you should at least know how to get code from code repositories that use it.  I know that there are other well known source control management systems out there.  Git and Mercurial are growing in popularity, but Subversion is probably the most widely used source control system out there.  We use Subversion internally here on the RAD Studio team.

Getting the Subversion Command Line Client

So the first thing you’ll need to do is to get the Subversion client.  (If you already have the client, you can skip this whole section.)  A whole lot of projects out there — whether on SourceForge or GoogleCode or CodePlex — use Subversion, so having the client is pretty useful.  It’s also small and easy to use, so here’s how you get it:

  1. I’d recommend getting the binary package from Collabnet:  In order to do that, you’ll have to get an account with them. You can sign up here:  http://www.open.collab.net/servlets/Join  If you don’t want to do that, you can get the binaries here as well:  http://subversion.tigris.org/servlets/ProjectDocumentList?folderID=91
  2. Go to:  http://www.collab.net/downloads/subversion/ and choose the second download entitled:  “CollabNet Subversion Command-Line Client v1.6.9 (for Windows)”
  3. There is a big orange button there that says “Download”.  Note again that this is the second button on that page as we are downloading the client only.
  4. Press the button and download the file. 
  5. Execute the installer that you downloaded. 

There, you just installed the Subversion client.  The path to the client should now be on your DOS PATH (if it isn’t, you can put it there) and you should be all ready to go on the command line. 

Note: Subversion has only very recently been brought under the umbrella of the Apache Project. As such, it is licensed under the very friendly Apache License, Version 2.0.   It was originally founded by Collabnet, but moved over to be a part of the Apache set of tools just this past February.  It is open source, so if you are really hard core, you can download the source and compile it all yourself.  Me?   I’m not that hardcore. I use the convenient binaries provided by the good folks at Collabnet. 

Grabbing the Code for TextScrubber

Once you have the command line Subversion client installed (it’s called svn.exe, by the way), you can easily download the code for TextScrubber.  Seriously, it’s like falling off a log.  Just do the following:

  1. Open a command window and go to the parent directory where you want the code to go.  For instance, if you want the code to go into c:\code\textscrubber, you want to start with your command prompt at c:\code
  2. Issue the following command: 

svn co https://textscrubber.svn.sourceforge.net/svnroot/textscrubber/trunk textscrubber

‘co’ stands for ‘checkout’. The second parameter is the URL for the ‘tip’ of the Subversion repository. The last parameter is the name of the subdirectory that will be created and filled with the code from SourceForge – so this command pulls the most recent version of the code from SourceForge into the \textscrubber directory. 

That’s it. You now have the most recent, up to date code for TextScrubber on your machine.  You are all ready to start straightening and unformatting text!

Using Subversion

In the future, if I make updates to the project, then all you need to do is navigate to the \textscrubber directory and do an "svn up" (short for "svn update") command, and it will get the latest and greatest code for you in a jiffy.  Can’t be much simpler than that. That’s the command you’ll probably end up using the most.

Since it is so popular and commonly used, there is a lot of information about using Subversion out on the web.  I’ve found this tutorial to be very useful, as well as the book written by the same guys that wrote the tutorial: Version Control with Subversion.

Another useful tool for using Subversion is TortoiseSVN.  TortoiseSVN is a Windows shell plug-in that provides all the functionality of the Subversion client in an easy to use GUI.  Below is a screen shot of the TortoiseSVN Log Window for the TSmiley project on SourceForge.

TortoiseSVN is also an open source project, so you can get it for free.  If TortoiseSVN proves to be valuable to you, I’d also encourage you to make a generous donation to the project.  I did.

Getting and Setting Up the Subversion Server

If you make the wise and sagely decision to go ahead and use source control for all of your code, you can easily set up the Subversion server.  You can, of course, download the Collabnet binaries and install them, but if you want to go the pathetically easy route (which I recommend), you should download and install VisualSVN Server.  I’m not even going to go through the steps to installing the server, because they are not much more than download, run the install, and you are done.  That’s it.  It takes like two minutes.  Seriously. They even provide you with a nice Management Console application for administering the server, making that really easy as well. 

(And while you are looking at VisualSVN Server, you can consider upgrading to their Enterprise edition.) 

Now, I myself run the full server (Apache, etc. – the whole ball ‘o’ wax) on my local machine and use it to manage my code.  This enables me to browse my code in a browser and do other things that are supported by a full-blown server.  But that might be a bit of overkill for some of you (I’m weird that way – I just want the “full” experience and like to be in learning mode) and you might not want to do that.  Instead, you can simply create local repositories on your local disk.  The Subversion tutorial above can show you how to do that. 

I have been using the server locally since I use a laptop exclusively and move it between work and home.  But I think I’ll set up the server on a machine at home and check in there. That way, my code will be stored on a separate machine.

More About Source Control

As you can probably tell, this post isn’t meant to be a primer on Subversion or source control in general.  The tutorial I listed above is a good place to learn about Subversion.  If you want to learn more about source control in general, I recommend reading Source Control HOWTO by Eric Sink.  Eric is a really interesting guy who has a great blog and also runs SourceGear, a commercial vendor of developer tools including Vault, a source control management tool that I recommend considering.  You might be interested to note that Vault is free for single users.

That’s it.

And so that is the first “Pretty Good Practice” for developing with Delphi: Manage your code with a source control system.

So, what do you think?  Are you ready to make the move to source control management?

Categories: News, Blogs, and Tips

Delphi Development Pretty Good Practices #1

Nick Hodges - Mon, 04/19/2010 - 15:38

A while back someone (I can’t remember who, sadly, sorry to the person who made the suggestion…) suggested that someone do a series of articles about “the best practices on how to develop a Delphi application”.  That’s a good idea.  There are a lot of ways to do things, and clearly there are some good ways and some bad ways, some okay ways and some really cool ways to develop with Delphi.

This series of articles will cover my personal ideas behind a good way to organize and build a Delphi application.  I’ll build a small application that does a few things.  I’ll organize the application in such a way that it illustrates some techniques that I believe are a good way to organize things.  I will explain my reasoning and in doing so I hope that you guys learn something.  This won’t be an exhaustive list or collection of ideas.

What I will not do is pretend that what I do and the way I do it is the only way to do things.  Heck, it may be that you end up thinking the way I do things is ridiculous.  I’ll let you all be the judge of that.   If you guys have different ideas, please feel free to express them.  If you like what you see, let me know.  I’ve been using Delphi for a long time, and so I think I’ve learned a thing or two along the way, but I’m not the world’s greatest Delphi developer and I certainly won’t claim to be the final arbiter of how things should be done.  I generally don’t like the term “Best Practices”, so I’ve titled this using “Pretty Good Practices”.

My goal here, really, is to be informative and stimulative and to evoke conversation and discussion.  I have developed a way of doing things over the years and I’ll be outlining them in this series of articles.  I don’t believe that this series will be all inclusive – that is, I won’t be covering every single aspect of every single development practice that I’ve ever thought of. Again, I hope merely to stimulate some discussion and do my humble part to  improve the way Delphi applications are developed.

The application I’ll use as an illustration is one that I use practically every day.  I call it “TextScrubber”, and it is a simple application that resides in the tray and lets me “clean up text”, either by removing formatting, removing line breaks, or both.  I use it when I copy text from the web or from an MS Word document that contains formatting information that I in turn want to paste into another document when I don’t want the formatting to come along.  I copy the text, click on the icon in the tray, and the text is “scrubbed” and put back on the clipboard.  (It is basically a much simpler version of PureText – a product that I heartily recommend that you use if you want this kind of functionality.)  TextScrubber is actually a very simple application with not a lot of code, but I’ve laid it out in such a way that it illustrates many of the methods that I have developed over the years.  Thus, I hope it will suffice to illustrate the things I want to discuss.

To start, I’ll talk about a few general ideas that drive how I do things. These are the general ideas that drive what I do when I develop:

  • Any code of consequence at all should be managed under source control, even if I am the only one using and writing the code.  I use Subversion for this. I actually run the Subversion server on my local machine, but you don’t need to do that.  You can manage a Subversion repository merely as a set of local files, if you want.  But even if you develop all alone, I think using source control is a great idea.
  • I want everything I write to be easily expandable and scalable.  I try to organize and write code that can easily be extended and enhanced.  I try to write classes that have a sensible hierarchy and that can be easily inherited from.
  • I want my code to be testable.  I want to write it in such a way that it is conducive to writing unit tests.  I want it to be easy to isolate and easy to find problems if they crop up. 
  • To as large a degree as possible, I separate the logic of an application from the user interface.  I don’t go full bore into an MVC application usually – that almost requires a special framework, but I do try very hard to separate out functionality into separate, testable classes.
  • I want my code to be well modularized.  That is, I want each class and each unit to have a single purpose, and be linked to other units and classes as loosely as possible. 

What would you add to this list?  What are some of the general rules you try to follow when developing an application?

Categories: News, Blogs, and Tips

Cool Stuff with Delphi #29

Nick Hodges - Fri, 04/16/2010 - 10:48

SamContacts is a “Simple Address Manager” that enables you to quickly and easily manage your contact information.

You can read more, or take a quick tour on their website

Categories: News, Blogs, and Tips

Random Thoughts on the Passing Scene #154

Nick Hodges - Fri, 04/16/2010 - 10:31
Categories: News, Blogs, and Tips

Fun With Testing DateUtils.pas #8

Nick Hodges - Thu, 04/15/2010 - 14:34

Okay, first up:  I’ve put the tests and updates for DateUtils.pas on CodeCentral.

So far, I’ve been writing tests in a pretty organized way.  But I write each individual test, one at a time.  I often end up writing a lot of “piece-meal” tests by hand.  I end up with a lot of code that looks like this:

procedure TDateUtilsTests.Test_IncMinuteBeforeEpochAdding;
var
  TestDate, Expected, TestResult: TDateTime;
begin
  TestDate := EncodeDateTime(945, 12, 7, 13, 34, 26, 765);

  TestResult := IncMinute(TestDate, 1);
  Expected := EncodeDateTime(945, 12, 7, 13, 35, 26, 765);
  CheckTrue(SameDateTime(TestResult, Expected),
    Format('IncMinute couldn''t add a single minute to %s ', [DateTimeToStr(TestDate)]));

  TestResult := IncMinute(TestDate, 45);
  Expected := EncodeDateTime(945, 12, 7, 14, 19, 26, 765);
  CheckTrue(SameDateTime(TestResult, Expected),
    Format('IncMinute couldn''t add 45 minutes to %s ', [DateTimeToStr(TestDate)]));

  TestResult := IncMinute(TestDate, MinsPerDay);
  Expected := EncodeDateTime(945, 12, 8, 13, 34, 26, 765);
  CheckTrue(SameDateTime(TestResult, Expected),
    Format('IncMinute couldn''t add a days worth of minutes to %s ', [DateTimeToStr(TestDate)]));
end;

Now those tests are fine, but they aren’t very easy to write. Adding another one either takes a lot of typing, or runs the risk of cut-n-paste errors that slow things down.  Wouldn’t it be cool if there were a more systematic way of running tests that made it easier to add a specific test?

Well, one of our R&D guys did just that for the DateUtils.pas unit.  Denis Totoliciu works on the RTL as part of our Romanian team.  Denis is a pretty smart guy and a big proponent of test-driven development.  He has been busy writing tests for DateUtils.pas as well, and he’s a lot more efficiency-minded than I am.  As a result, he’s also a lot more productive and prolific. This is why I am the manager and he is the developer. 

If you look at the code for the unit tests, you can probably see where he has written tests and where I have (though I’ll be taking up Denis’s method for future tests).  He recognized that most tests for any given DateUtils.pas routine are all going to be pretty similar, so he created a system whereby you create a large array of data, and then iterate over that array and run the tests on the data in each element. This way, if you want to add tests, you can simply add an item to the data array with the input and expected output. 

For instance, when he first did this, I noticed that he didn’t always add data to test dates before the epoch.  Since I’ve learned the hard way that whenever you test a routine you should also test dates before the epoch as well as after, it was really easy for me to simply add the data to the array and expand the number of tests that were run. 
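For context on why “before the epoch” gets its own tests: the Delphi TDateTime epoch (date zero) is December 30, 1899, so earlier dates encode as negative floating-point values, which can exercise different code paths in date arithmetic. A quick sketch:

```pascal
// TDateTime zero is 1899-12-30; earlier dates are negative.
var
  Epoch, Early: TDateTime;
begin
  Epoch := EncodeDate(1899, 12, 30);  // 0.0
  Early := EncodeDate(945, 12, 7);    // a negative value
  Assert(Epoch = 0);
  Assert(Early < 0);
end.
```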

Here’s how this looks:

procedure TDateUtilsTests.Test_DayOfTheMonth;
const
  CMax = 16;
type
  TDateRec = record
    Year, Month, Day: Word;
    ExpectedDay: Word;
  end;
  TDates = array [1..CMax] of TDateRec;
const
  CDates: TDates = (
    (Year: 2004; Month: 01; Day: 01; ExpectedDay: 01),  // 1
    (Year: 2004; Month: 01; Day: 05; ExpectedDay: 05),
    (Year: 2004; Month: 01; Day: 08; ExpectedDay: 08),  // 3
    (Year: 2004; Month: 02; Day: 14; ExpectedDay: 14),
    (Year: 2004; Month: 02; Day: 29; ExpectedDay: 29),  // 5
    (Year: 2004; Month: 04; Day: 24; ExpectedDay: 24),
    (Year: 2004; Month: 07; Day: 27; ExpectedDay: 27),  // 7
    (Year: 2004; Month: 12; Day: 29; ExpectedDay: 29),
    (Year: 2005; Month: 01; Day: 01; ExpectedDay: 01),  // 9
    (Year: 2005; Month: 01; Day: 03; ExpectedDay: 03),
    (Year: 2005; Month: 05; Day: 05; ExpectedDay: 05),  // 11
    (Year: 2005; Month: 07; Day: 12; ExpectedDay: 12),
    (Year: 2005; Month: 09; Day: 11; ExpectedDay: 11),  // 13
    (Year: 2005; Month: 02; Day: 21; ExpectedDay: 21),
    (Year: 2005; Month: 02; Day: 25; ExpectedDay: 25),  // 15
    (Year: 2005; Month: 04; Day: 10; ExpectedDay: 10)
  );
var
  TestDate: TDateTime;
  Expected: Word;
  Result: Word;
  i: Integer;
begin
  for i := Low(CDates) to High(CDates) do
  begin
    TestDate := EncodeDate(CDates[i].Year, CDates[i].Month, CDates[i].Day);
    Expected := CDates[i].ExpectedDay;
    Result := DayOfTheMonth(TestDate);
    CheckTrue(SameDate(Expected, Result),
      Format('DayOfTheMonth failed for test %d.', [i]));
  end;
end;

Note that the first thing this code does is declare a really big array full of dates (all after the epoch, too). The array is of type TDates, which is simply an array of TDateRec. All of these types are declared locally, so each routine can have its own separate data types. The array holds the inputs as well as the expected result. Each element of the array is informally numbered in a comment, and when a test fails, its number is reported via the counter used in the for statement.

Now for me to add a bunch of tests that use data before the epoch is a piece of cake. I merely change the value for CMax and then make the array look like this:

const
  CDates: TDates = (
    (Year: 2004; Month: 01; Day: 01; ExpectedDay: 01),  // 1
    (Year: 2004; Month: 01; Day: 05; ExpectedDay: 05),
    (Year: 2004; Month: 01; Day: 08; ExpectedDay: 08),  // 3
    (Year: 2004; Month: 02; Day: 14; ExpectedDay: 14),
    (Year: 2004; Month: 02; Day: 29; ExpectedDay: 29),  // 5
    (Year: 2004; Month: 04; Day: 24; ExpectedDay: 24),
    (Year: 2004; Month: 07; Day: 27; ExpectedDay: 27),  // 7
    (Year: 2004; Month: 12; Day: 29; ExpectedDay: 29),
    (Year: 2005; Month: 01; Day: 01; ExpectedDay: 01),  // 9
    (Year: 2005; Month: 01; Day: 03; ExpectedDay: 03),
    (Year: 2005; Month: 05; Day: 05; ExpectedDay: 05),  // 11
    (Year: 2005; Month: 07; Day: 12; ExpectedDay: 12),
    (Year: 2005; Month: 09; Day: 11; ExpectedDay: 11),  // 13
    (Year: 2005; Month: 02; Day: 21; ExpectedDay: 21),
    (Year: 2005; Month: 02; Day: 25; ExpectedDay: 25),  // 15
    (Year: 2005; Month: 04; Day: 10; ExpectedDay: 10),
    // Before the Epoch
    (Year: 1004; Month: 01; Day: 01; ExpectedDay: 01),  // 17
    (Year: 1004; Month: 01; Day: 05; ExpectedDay: 05),
    (Year: 1004; Month: 01; Day: 08; ExpectedDay: 08),  // 19
    (Year: 1004; Month: 02; Day: 14; ExpectedDay: 14),
    (Year: 1004; Month: 02; Day: 29; ExpectedDay: 29),  // 21
    (Year: 1004; Month: 04; Day: 24; ExpectedDay: 24),
    (Year: 1004; Month: 07; Day: 27; ExpectedDay: 27),  // 23
    (Year: 1004; Month: 12; Day: 29; ExpectedDay: 29),
    (Year: 1005; Month: 01; Day: 01; ExpectedDay: 01),  // 25
    (Year: 1005; Month: 01; Day: 03; ExpectedDay: 03),
    (Year: 1005; Month: 05; Day: 05; ExpectedDay: 05),  // 27
    (Year: 1005; Month: 07; Day: 12; ExpectedDay: 12),
    (Year: 1005; Month: 09; Day: 11; ExpectedDay: 11),  // 29
    (Year: 1005; Month: 02; Day: 21; ExpectedDay: 21),
    (Year: 1005; Month: 02; Day: 25; ExpectedDay: 25),  // 31
    (Year: 1005; Month: 04; Day: 10; ExpectedDay: 10)
  );

And now I have a complete set of tests for dates before the epoch.  Piece of cake.  If I find specific dates that I want to test, then I can easily add those as well.  The actual code that runs the tests doesn’t care how many elements there are in the array; it will happily process all the data passed to it.  Now, you don’t get to write as many fun error messages as you do when you do it the more straightforward way, but that is a small price to pay for the efficiency gained.

And if a test fails, you are given the test number, and you can see the test data and expected result right away right in the code.

And here’s a further aside for those of you who don’t like me using random dates:  this test for EncodeDateDay and DecodeDateDay tests every single day in every single year. 

procedure TDateUtilsTests.Test_EncodeDateDay_DecodeDateDay;
var
  TempYear, MinYear, MaxYear: Word;
  RYear: Word;
  TestDate: TDateTime;
  TempDay: Integer;
  RDayOfYear: Word;
begin
  // Test every possible day for every possible year.
  MinYear := MinAllowableYear; // 1
  MaxYear := MaxAllowableYear; // 9999
  for TempYear := MinYear to MaxYear do
  begin
    for TempDay := 1 to DaysPerYear[IsLeapYear(TempYear)] do
    begin
      TestDate := EncodeDateDay(TempYear, TempDay);
      DecodeDateDay(TestDate, RYear, RDayOfYear);
      CheckEquals(TempYear, RYear,
        Format('EncodeDateDay() / DecodeDateDay() failed. TempYear = %d; RYear = %d',
          [TempYear, RYear]));
      CheckEquals(TempDay, RDayOfYear,
        Format('EncodeDateDay() / DecodeDateDay() failed. TempDay = %d; RDayOfYear = %d',
          [TempDay, RDayOfYear]));
    end;
  end;
end;

It takes a little longer to process, but it covers every single possible test case, I believe.

So that should cover it.  I think I’ll wrap the series up here.  I’m almost done writing tests for DateUtils.pas.  When I am done, I think I will move on to StrUtils.pas. 

Categories: News, Blogs, and Tips

Fun With Testing DateUtils.pas #7

Nick Hodges - Wed, 04/14/2010 - 08:03

Okay, so after that last post, and after all the fixes that I did, things have settled down a little bit.  I thought I’d take advantage of this interlude to tell you some interesting things about TDateTime in Delphi, because along the way I discovered a thing or two that surprised me.

The first thing that you might be interested in is that the calendar used by the Delphi TDateTime has a specific name:  the Proleptic Gregorian Calendar.  Calendars, of course, have been notoriously inaccurate over the years, and even ours isn’t entirely accurate, in that we have to have leap years every so often (and not as often as one might believe…). We even have these “leap seconds” every once in a while, though the notion of being able to measure things down that precisely is kind of weird.  Starting with the Romans – Julius Caesar, actually – the Western world used the Julian calendar.  And it worked pretty well – it has 365 days and a leap year every four years – but it wasn’t entirely accurate, and (as you can read in the Wikipedia entry) politics got involved, and the calendar got out of whack pretty easily and pretty often.

So anyway, as you may have noticed, some of the tests that I have written include some pretty big time increments – like 1000 years’ worth of seconds and things like that.  And I also wanted to make sure that the incrementing routines worked across the epoch of December 30, 1899.  So I had to be able to do some pretty serious calculations.  I found a pretty good website for them called timeanddate.com.  This site has a bunch of calculators for measuring the span between dates and times and for calculating a date based on its distance from another date.  So I used it to figure out what the date is if you decrement two hundred years’ worth of seconds (that’s 6,307,200,000 seconds for you math geeks…) from, say, noon on June 30, 1945.  (It’s not exactly noon on June 30, 1745, due to leap days.)  Well, I would calculate it, and then write the tests, but they would fail because my expected result was always eleven days different from the test result.  Eleven days – how weird, huh?

Well, here’s the deal.  Somewhere along the way, the really smart guys who figure out this kind of thing came up with a new calendar – the Gregorian calendar.  It’s different from the Julian calendar, and starting in the 16th century, the world gradually converted over to use the Gregorian Calendar instead of the Julian calendar (A good chunk of Europe started in 1582, and the last folks to make the switch were the Russians who didn’t change until 1918).  But to do that, you usually had to skip about 10 or 11 days.  Great Britain and all of its possessions (including the colonies that would become the United States) made the switch in 1752.  Therefore, in the English world, the day following September 2, 1752 was September 14, 1752.  There was no September 3 – 13, 1752.  Just didn’t exist.  Once I discovered that, it explained the missing eleven days.

But what does this mean for our trusty TDateTime?  For a minute there I was afraid that I was going to have to do all these special calculations to account for this unusual anomaly, but then I came to my senses and realized:  That can’t be right.  And I was right.  Instead, Delphi uses, as I mentioned above, the Proleptic Gregorian Calendar – that is, it assumes that the Gregorian calendar is in force all the way back to January 1, 0001.  So for TDateTime, there is a September 4, 1752 (noon on that day is the value –53807.5) and every single date “like normal” all the way down to Year 1.  This makes sense, because trying to devise a calendaring system that keeps track of all the vagaries of the Julian calendar would be basically impossible.  Instead, Delphi uses a system that “makes sense” for a computer.  A number of other languages and tools use the Proleptic Gregorian Calendar, including MySQL and PHP.
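If you want to see this for yourself, a quick console sketch like the following should do it.  (This is my illustration, not code from the post; it assumes only EncodeDate and FormatDateTime from SysUtils, and uses the –53807.5 value quoted above.)

```pascal
program ProlepticDemo;

{$APPTYPE CONSOLE}

uses
  SysUtils;

begin
  // September 8, 1752 never existed in the English-speaking world,
  // but the Proleptic Gregorian Calendar encodes it without complaint.
  WriteLn(FormatDateTime('dd mmmm, yyyy', EncodeDate(1752, 9, 8)));
  // The value mentioned above: noon on September 4, 1752.
  WriteLn(FormatDateTime('dd mmmm, yyyy hh:mm AM/PM', -53807.5));
  ReadLn;
end.
```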

That was probably more than you wanted to know about TDateTime, but it’s all stuff that you have to know to write a complete test suite for DateUtils.pas. So far, that summarizes the issues that I’ve run across in testing the unit. I have a ways to go to have a complete test suite, but if I run across more issues, I’ll post on them.

The next post I do will be about a testing scheme that one of our developers, Alexander Ciobanu, devised to make writing tests for testing date functions a little easier.

Categories: News, Blogs, and Tips

Fun With Testing DateUtils.pas #6

Nick Hodges - Tue, 04/13/2010 - 15:55

Okay, so when we last left off, IncMillisecond was still failing in certain circumstances.  Let’s take a look at that.  Note, too, that I have this crazy notion that if you have a function called IncMillisecond, then it should be able to, you know, increment a millisecond.

Here is the IncMilliSecond that you very likely have on your computer:

function IncMilliSecond(const AValue: TDateTime;
  const ANumberOfMilliSeconds: Int64): TDateTime;
begin
  if AValue > 0 then
    Result := ((AValue * MSecsPerDay) + ANumberOfMilliSeconds) / MSecsPerDay
  else
    Result := ((AValue * MSecsPerDay) - ANumberOfMilliSeconds) / MSecsPerDay;
end;

Now that probably works just fine for you — as long as you don’t have a date that has a value less than the epoch. Below the epoch, and particularly in that magic "48 Hours" area right around the epoch itself, things go horribly awry. As we saw last time, this test will fail:

TestDate := 0.0;
TestResult := IncMillisecond(TestDate, -1);
Expected := EncodeDateTime(1899, 12, 29, 23, 59, 59, 999);
CheckTrue(SameDateTime(Expected, TestResult),
  'IncMillisecond failed to subtract 1ms across the epoch');

It fails for a number of reasons, actually. The first is precision. The current implementation of IncMillisecond multiplies a fractional TDateTime value by a really big number, adjusts it, and then divides by that same really big number.  All of this cries out “precision error!” (You should thank me – I almost used the <blink> tag there.  Phew!)  And that is basically what happens: the arithmetic isn’t precise enough to “see” the difference of a single millisecond.

Plus, if you do things around the value of zero, it gets really weird.  For instance, check out the output of this console application:

program IncMillisecondTest;

{$APPTYPE CONSOLE}

uses
  SysUtils, DateUtils;

var
  TestDate: TDateTime;
  TestResult: TDateTime;
  DateStr: string;
begin
  TestDate := 0.0;
  TestResult := IncMilliSecond(TestDate, 1001);
  DateStr := FormatDateTime('dd mmmm, yyyy hh:mm:ss:zzz', TestResult);
  WriteLn(DateStr);
  TestResult := IncMilliSecond(TestDate, -1001);
  DateStr := FormatDateTime('dd mmmm, yyyy hh:mm:ss:zzz', TestResult);
  WriteLn(DateStr);
  ReadLn;
end.

I think it is safe to say that something is amiss.

So finally, it is time to rework IncMillisecond, because this pesky little routine is actually at the heart of a bunch of issues with DateUtils.pas. As it turns out, if you call any of the IncXXXX routines, it all ends up as a call to IncMillisecond, so this needs to be right.

Okay, so I started out writing this really cool implementation that checked for before and after the epoch, and divided large increments into years and months and days to make sure that there was no loss of precision.  I spent a lot of time on it, and had a whole bunch of tests written and passing with it.   But then it suddenly occurred to me that the trusty TTimeStamp data type and its accompanying conversion routines could once again come to the rescue:

function IncMilliSecond(const AValue: TDateTime;
  const ANumberOfMilliSeconds: Int64 = 1): TDateTime;
var
  TS: TTimeStamp;
  TempTime: Comp;
begin
  TS := DateTimeToTimeStamp(AValue);
  TempTime := TimeStampToMSecs(TS);
  TempTime := TempTime + ANumberOfMilliSeconds;
  TS := MSecsToTimeStamp(TempTime);
  Result := TimeStampToDateTime(TS);
end;

And here is the cool thing:  I was able to change from my sweet but overly complicated version to the new version above without worrying too much about it, because when I made the switch, all of the tests that I had written for my original version still passed.  This was so cool – I could make the change with confidence because of the large set of tests that exercised all aspects of IncMillisecond.

Anyhow….  Again, the TTimeStamp type is precise, and easy. No need to do direct arithmetic on the TDateTime itself. Instead, we can deal with integers and get the exact answer every time no matter how many milliseconds you pass in. You can pass in 5000 years’ worth of milliseconds, and all will be well. For instance, this test passes just fine.

TestDate := EncodeDate(2010, 4, 8);
MSecsToAdd := Int64(5000) * DaysPerYear[False] * HoursPerDay *
  MinsPerHour * SecsPerMin * MSecsPerSec; // 1.5768E14, or 157,680,000,000,000
TestResult := IncMilliSecond(TestDate, MSecsToAdd);
Expected := EncodeDate(7010, 4, 8);
ExtraLeapDays := LeapDaysBetweenDates(TestDate, Expected);
Expected := IncDay(Expected, -ExtraLeapDays);
CheckTrue(SameDate(Expected, TestResult),
  'IncMillisecond failed to add 5000 years worth of milliseconds.');

And for you curious folks, here the implementation for the helper function LeapDaysBetweenDates:

function TDateUtilsTests.LeapDaysBetweenDates(aStartDate, aEndDate: TDateTime): Word;
var
  TempYear: Integer;
begin
  if aStartDate > aEndDate then
    raise Exception.Create('StartDate must be before EndDate.');
  Result := 0;
  for TempYear := YearOf(aStartDate) to YearOf(aEndDate) do
  begin
    if IsLeapYear(TempYear) then
      Inc(Result);
  end;
  if IsInLeapYear(aStartDate) and
     (aStartDate > EncodeDate(YearOf(aStartDate), 2, 29)) then
    Dec(Result);
  if IsInLeapYear(aEndDate) and
     (aEndDate < EncodeDate(YearOf(aEndDate), 2, 29)) then
    Dec(Result);
end;

From there, the rest of the IncXXXXX routines are simple – they merely multiply by the next “level up” of time intervals and call the previous one.  I’ve marked them all inline so that it all effectively happens in one function call.  Thus, we have:

function IncHour(const AValue: TDateTime;
  const ANumberOfHours: Int64 = 1): TDateTime;
begin
  Result := IncMinute(AValue, ANumberOfHours * MinsPerHour);
end;

function IncMinute(const AValue: TDateTime;
  const ANumberOfMinutes: Int64 = 1): TDateTime;
begin
  Result := IncSecond(AValue, ANumberOfMinutes * SecsPerMin);
end;

function IncSecond(const AValue: TDateTime;
  const ANumberOfSeconds: Int64 = 1): TDateTime;
begin
  Result := IncMilliSecond(AValue, ANumberOfSeconds * MSecsPerSec);
end;

One thing to note: DateUtils.pas will only handle years from 1 to 9999. TDateTime won’t handle any date earlier than midnight on January 1, 0001, nor a date later than December 31, 9999. So if you are using Delphi to track specific dates before that (or if you plan on doing some time travel into the far future) you’ll have to use some other data type to keep track of dates.
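To make the boundaries concrete, here is a quick sketch (mine, not from the post); it assumes, as the shipping SysUtils does, that EncodeDate raises EConvertError for an out-of-range year:

```pascal
var
  FirstDay, LastDay: TDateTime;
begin
  FirstDay := EncodeDate(1, 1, 1);       // midnight, January 1, 0001
  LastDay  := EncodeDate(9999, 12, 31);  // December 31, 9999
  try
    EncodeDate(10000, 1, 1);             // one year too far
  except
    on E: EConvertError do
      WriteLn('Year 10000 is not representable: ', E.Message);
  end;
end;
```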

Now, once you’ve done the above, it is tempting to say “Hey, for IncDay, I’ll just add the days to the value passed in.  I mean, that’s all you are really doing.”  Well, guess what!  You can’t do that!  If you have this for your IncDay:

function IncDay(const AValue: TDateTime;
  const ANumberOfDays: Integer = 1): TDateTime;
begin
  Result := AValue + ANumberOfDays;
end;

Then this test will not pass because of the strange “48 hour” deal we talked about last post:

TestDate := EncodeDateTime(1899, 12, 30, 1, 43, 28, 400);
TestResult := IncDay(TestDate, -1);
Expected := EncodeDateTime(1899, 12, 29, 1, 43, 28, 400);
CheckTrue(SameDate(Expected, TestResult),
  'IncDay failed to decrement one day from the epoch');

Instead, you have to send it all the way back to milliseconds via IncHour, IncMinute, and IncSecond:

function IncDay(const AValue: TDateTime;
  const ANumberOfDays: Integer = 1): TDateTime;
begin
  Result := IncHour(AValue, ANumberOfDays * HoursPerDay);
end;

Once you put those changes in, well, things get a lot greener.  I have now written a very thorough set of unit tests for all of the IncXXXX routines, adding and subtracting dates both before and after the epoch.  I also test very carefully incrementing and decrementing across the epoch and inside that crazy little 48 hour spot.  They are all passing.

I’ll create a unit with these new fixes in it that you can use if you want.  I’ll also publish the unit that includes these tests that I’ve written.  (When you look at it, be nice.  It’s not very pretty, but it gets the job done.)  As I continue through, I’ll update that file with any other fixes and changes that get made.

Categories: News, Blogs, and Tips

Fun With Testing DateUtils.pas #5

Nick Hodges - Wed, 04/07/2010 - 10:02

Okay, so when I left you hanging in the last post, I promised I’d explain what was up with IncMillisecond.  But before I do that, I have to explain a bunch of stuff about TDateTime. And as it turns out, we’ll have to take a detour, and we won’t exactly get to IncMillisecond this time around. 

Most of you probably know how TDateTime works.  TDateTime is a Double that keeps track of the date – the number of days since the epoch – in the integer part, or the “front”, and the time – the fraction of a day elapsed since midnight – in the decimal part, or the “back”.  The key thing to know is the value of the “epoch” that I mentioned previously.  For TDateTime, the epoch is 0.0, which corresponds to exactly 00:00:00.000 (midnight) on December 30, 1899.  (You can read up on all the gory details about why it is December 30, 1899, and not December 31, 1899.)

What this means is that a date time of 2.0 is January 1, 1900 at midnight.  2.5 would be noon on January 1, 1900.  1000.25 would be one thousand days and six hours past December 30, 1899, or September 26, 1902 at 6:00:00 AM.  It also means that –1 is December 29, 1899, and –1000.25 is Sunday, April 4, 1897 at 6:00:00 AM. 

Now, that last one was a bit tricky if you look carefully at it.  The days part was negative (–1000) but the hours part was not.  Remember, the left part of the double is the number of days before the epoch, but the decimal part – the part to the right, if you will – is always a positive value starting at midnight of the day in question.   I emphasized that last part pretty strongly because once a date goes negative, a counterintuitive thing happens.  The negative part only really applies to the left portion of the value.  The decimal value represents a positive offset from midnight.  So to do the last calculation above, I actually had to subtract 999 days and 18 hours to get the right answer.  And therein lies the heart of the problem that we have run into with incrementing milliseconds (and seconds and minutes and hours, as it turns out) for days before the epoch. 

Here’s another way to think about it:  what is the date time value for –0.5?  Well, the correct answer is noon on 29 December 1899.  But look at the left part of the value – it is still zero, which is, of course, 30 December 1899!  And what do you get if you call Frac(-0.5) on that value?  Ready for it? — -0.5!  And I just got done telling you that you can’t have a “negative” time value.  Time values are always positive offsets from midnight.  And herein lies our problem. 

Another interesting note:  In the particular world of TDateTime, 0 has an unusual “feature”.  When viewed as the “left” side of a TDateTime, it actually represents a span of time just a hair less than 48 hours.  According to the pure mathematical formula for managing dates and times in Delphi, December 30, 1899 actually has 48 hours.  That is, it stretches from –0.999… to 0.999…. in time.  This is weird, huh?  Never really thought about that, did you?  Well, the whole Date/Time system has to account for this little anomaly. 
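A little console sketch (my illustration, not from the post) makes both quirks visible: mathematically, 0.75 and –0.75 are 36 hours apart, yet both decode to December 30, 1899.

```pascal
program DayZeroDemo;

{$APPTYPE CONSOLE}

uses
  SysUtils;

begin
  // Both of these decode to December 30, 1899 -- the "48 hour" day.
  WriteLn(FormatDateTime('dd mmmm, yyyy hh:mm AM/PM', 0.75));
  WriteLn(FormatDateTime('dd mmmm, yyyy hh:mm AM/PM', -0.75));
  // Frac() of a negative TDateTime is negative, even though the time
  // portion is always supposed to be a positive offset from midnight.
  WriteLn(Frac(-0.5):0:2);
  ReadLn;
end.
```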

So, we have two related issues here:  time values for negative TDateTime values are really positive, and this weird 48 hour day thing right at the epoch.  Well, frankly, I didn’t think about or know about either one when I started out writing my unit tests (until the tests revealed these issues to me – unit testing rocks…), and I am very sad to say that the original author of DateUtils.pas didn’t either.  Both of these errors manifest themselves when calculating times at and before the epoch.  That’s the bad part.  And I know all of this because of unit testing.  That’s the good part. 

But wait, there is more.  As it turns out, all of the time calculations in DateUtils.pas are based on floating point values.  Very, very small floating point values, in fact.  For instance, take a look at the current implementation of IncMillisecond:

function IncMilliSecond(const AValue: TDateTime;
  const ANumberOfMilliSeconds: Int64): TDateTime;
begin
  if AValue > 0 then
    Result := ((AValue * MSecsPerDay) + ANumberOfMilliSeconds) / MSecsPerDay
  else
    Result := ((AValue * MSecsPerDay) - ANumberOfMilliSeconds) / MSecsPerDay;
end;

The value for MSecsPerDay is pretty large – 86,400,000 – and when you start dividing small numbers by really big numbers you get even smaller numbers – numbers so small that they lose precision.  Now, you can see that our developer at least recognized that something was going a little goofy with the dates before zero, but the current implementation has the error we are currently looking at.  Alas.

Or even better, go to SysUtils.pas and take a look at TryEncodeTime, which does some arithmetic fraught with possibilities for error and inaccuracy:

function TryEncodeTime(Hour, Min, Sec, MSec: Word; out Time: TDateTime): Boolean;
begin
  Result := False;
  if (Hour < HoursPerDay) and (Min < MinsPerHour) and
     (Sec < SecsPerMin) and (MSec < MSecsPerSec) then
  begin
    Time := (Hour * (MinsPerHour * SecsPerMin * MSecsPerSec) +
             Min * (SecsPerMin * MSecsPerSec) +
             Sec * MSecsPerSec +
             MSec) / MSecsPerDay;
    Result := True;
  end;
end;

That will create some seriously small values, won’t it, given times near midnight on either side?  I’ve subsequently reworked this routine to be more precise.  (I’ll post all this new code for you real soon now.)

Okay, so where to turn in all of this?  The first thing I did was to rewrite IncMilliseconds.  But as you’ll see, even this was really, really tricky and fraught with peril as well.

Okay, so I thought: I’m doing all this test-driven development; what I need to do right now is to write some test cases that I know should pass before I even start.  First, I figured that if you have a function called IncMillisecond, then it ought to at least have enough accuracy and precision to produce a different date/time combination, right?

TestDate := 0.0;
TestResult := IncMillisecond(TestDate);
CheckFalse(SameDateTime(TestDate, TestResult),
  'IncMillisecond failed to change the given date');

And of course, this fails.  Good – I expected it to. But after a few hours of writing code, and wondering why it keeps failing, I suddenly realize that SameDateTime is the problem here!  Argh!

And then it hits me – Uh oh.  I’ve started pulling on a thread, and if I keep pulling on it, it is going to keep unraveling and unraveling….  And that is exactly what happened.

Check out your SameDateTime:

function SameDateTime(const A, B: TDateTime): Boolean;
begin
  Result := Abs(A - B) < OneMillisecond;
end;

Now, that looks all well and good. Take the absolute value of the difference, and as long as it is less than 1ms, then the times are effectively the same. OneMillisecond is defined as OneMillisecond = 1 / MSecsPerDay, or about 1.15740741 × 10⁻⁸. And in the world of computers, that is a pretty small number – so small, in fact, that it is pretty easy for small differences not to register. In our simple test here, the A value is 0, and the B value is -1.1574074074e-08. And guess what: that difference is not quite enough to get SameDateTime to return False. It returns True instead.

So, let’s follow this loose thread a bit more, and then we’ll quit for today. We need a SameDateTime function (and, as it turns out, a SameTime function) that returns a correct answer for dates that actually are OneMillisecond apart. We need something that gives answers based on real numbers of milliseconds.  And SysUtils.pas has the answer:  TTimeStamp.

TTimeStamp is declared as follows:

{ Date and time record }
TTimeStamp = record
  Time: Integer; { Number of milliseconds since midnight }
  Date: Integer; { One plus number of days since 1/1/0001 }
end;

Now, that is more like it – integers, and not these fuzzy floating point numbers! The accompanying DateTimeToTimeStamp function is exactly what we need. Now we can write very precise SameDateTime and SameTime functions:

function SameDateTime(const A, B: TDateTime): Boolean;
var
  TSA, TSB: TTimeStamp;
begin
  TSA := DateTimeToTimeStamp(A);
  TSB := DateTimeToTimeStamp(B);
  Result := (TSA.Date = TSB.Date) and (TSA.Time = TSB.Time);
end;

function SameTime(const A, B: TDateTime): Boolean;
begin
  Result := (DateTimeToTimeStamp(A).Time = DateTimeToTimeStamp(B).Time);
end;

Those two new implementations will, in fact, return correct results for two dates one millisecond apart.  And let’s just say that TTimeStamp is going to be making more appearances in the new, updated DateUtils.pas in the future.
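As a quick sanity check (my sketch, not from the post), a round trip through TTimeStamp is exact, since both fields are integers; it assumes EncodeDateTime from DateUtils:

```pascal
var
  D: TDateTime;
  TS: TTimeStamp;
begin
  D := EncodeDateTime(2010, 4, 13, 15, 55, 0, 1);
  TS := DateTimeToTimeStamp(D);
  WriteLn('Milliseconds since midnight: ', TS.Time);
  WriteLn('Days since 1/1/0001, plus one: ', TS.Date);
  // The round trip survives SameDateTime, millisecond and all.
  WriteLn(SameDateTime(D, TimeStampToDateTime(TS)));
end;
```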

Okay, so our original, simple test above passes now. But guess what: this second one still doesn’t:

TestDate := 0.0;
TestResult := IncMillisecond(TestDate, -1);
CheckFalse(SameDateTime(TestDate, TestResult),
  'IncMillisecond failed to change the given date');
Expected := EncodeDateTime(1899, 12, 29, 23, 59, 59, 999);
CheckTrue(SameDateTime(Expected, TestResult),
  'IncMillisecond failed to subtract 1ms across the epoch');

So next time, we’ll get cracking on that.

Categories: News, Blogs, and Tips

Fun With Testing DateUtils.pas #4

Nick Hodges - Thu, 04/01/2010 - 14:39

First, an admin note:  I’ve adjusted the color of strings in my code.   I was optimizing the colors for reading on my blog proper as opposed to the main site (hadn’t even thought of it, actually, sorry.), and someone pointed out that the colors weren’t working on the main site at all.  Hope that this post is better.  I changed the last post from Yellow to Lime.  If you have a better color suggestion, please let me know.  I’ve also endeavored to wrap those long code lines. The code won’t compile as shown, but I trust that you guys can figure it out……

Okay back to the topic at hand.

So things are rolling along.  I’ve been writing tons of tests, they are all passing, things are going well, and it’s been fun.

But if you have any flair for the dramatic, you can see where this is going….

So there I was, rolling along, writing tests for WeeksInAYear (bet you didn’t know that according to ISO 8601, some years have 53 weeks in them, did you? 1981 has 53 weeks, for example), Today, Yesterday – you know, normal stuff.  I’m checking edge conditions, standard conditions, all kinds of years, every year.  You know, really exercising things.  All was going smoothly.

For instance, here are the tests for Yesterday.  Not too hard to test, as there is really only one thing you can do:

procedure TDateUtilsTests.Test_Yesterday;
var
  TestResult: TDateTime;
  Expected: TDateTime;
begin
  TestResult := Yesterday;
  Expected := IncDay(DateOf(Now), -1);
  CheckEquals(TestResult, Expected,
    'The Yesterday function failed to return the correct value.');
  TestResult := Yesterday;
  Expected := DateOf(Now);
  CheckFalse(SameDate(TestResult, Expected),
    'The Yesterday function thinks Yesterday is Today, and means that Einstein was totally wrong.');
end;

Just a couple of tests that you can do – or at least that I can think of.  (Anyone have any other ideas?)  The fun part is that these tests will also fail if IncDay or DateOf fails to perform as advertised, so we get triple the testing!  Sweet!
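One more that occurs to me (a hypothetical addition, in the same spirit as the checks above): incrementing Yesterday by one day should land exactly on Today.

```pascal
TestResult := IncDay(Yesterday);
Expected := DateOf(Now);
CheckTrue(SameDate(TestResult, Expected),
  'IncDay(Yesterday) failed to land on Today.');
```

(Like the original checks, this could misfire if the clock rolls past midnight between the two calls.)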

Things were going along swimmingly, and then all of a sudden, out of left field, all this unit testing stuff suddenly proved to be as valuable as everyone says it is.

Here’s how it happened: I was going along, writing tests, and I wrote this one:

procedure TDateUtilsTests.Test_EndOfTheDay;
var
  TestDate: TDateTime;
  TestResult: TDateTime;
  Expected: TDateTime;
  i: Integer;
begin
  for i := 1 to 500 do
  begin
    TestDate := CreateRandomDate(False, 100, 2500);
    TestResult := EndOfTheDay(TestDate);
    // First, don't change the date
    CheckEquals(DayOf(TestDate), DayOf(TestResult),
      Format('EndOfTheDay changed the day for test date: %s (Result was: %s)',
        [DateTimeToStr(TestDate), DateTimeToStr(TestResult)]));
    // Next, is it really midnight?
    Expected := DateOf(TestDate);
    Expected := IncMillisecond(Expected, -1);
    Expected := IncDay(Expected);
    CheckTrue(SameDateTime(TestResult, Expected),
      Format('EndOfTheDay didn''t return midnight for test date: %s (Result was: %s, Expected was: %s)',
        [DateTimeToStr(DateOf(TestDate)), DateTimeToStr(TestResult), DateTimeToStr(Expected)]));
  end;
end;

 

Pretty simple and straightforward.  But — BOOM – this thing fails. Badly.  If you run this test on your computer, the second check, the call to CheckTrue, will pretty quickly fail and you’ll get a message something like:

Test_StartEndOfTheDay: ETestFailure at  $0051FF06 EndOfTheDay didn’t return midnight for test date: 5/12/0366 (Result was: 5/12/0366 11:59:59 PM, Expected was: 5/14/0366 11:59:59 PM), expected: <True> but was: <False>

Since the test is creating random dates, you’ll never get the exact same error, but pretty soon I figured out that it only failed for dates before the epoch – that is, for dates that have a negative value and are thus earlier than 30 December 1899. 

Naturally, I was left scratching my head.  The first inclination is that the test is somehow not correct. But I stared at it for a good long while and came to the conclusion that the test wasn’t the problem. 

The first check is fine – the call to EndOfTheDay doesn’t actually change the date as it shouldn’t.  But the second test is where the trouble started. 

EndOfTheDay is a pretty simple function;  it returns the very last millisecond of the day for the date/time combination passed to it – that is, 11:59:59.999 PM for the day in question. It is implemented like so:

// From DateUtils.pas
function EndOfTheDay(const AValue: TDateTime): TDateTime;
begin
  Result := RecodeTime(AValue, 23, 59, 59, 999);
end;

So the natural thing is to actually check to see if the result is indeed that value.  So, I did the natural thing:  I set the expected date to midnight on the date of the value to be tested, decremented one millisecond, and since that moved the date back one day, I moved it forward again with IncDay.  Then I checked to see if they were indeed the same date/time combination.  Well, guess what.  They weren’t. 

I originally had a single line of code combining the three statements that set the value for Expected.  A quick look at the debugger told me that the Expected result wasn’t getting properly calculated.  Breaking it down quickly pointed to a strange phenomenon:  for dates before the epoch, the IncMillisecond call was actually moving the date portion forward by two days.  (Mysteriously, dates after the epoch all worked fine.  Weird.)  That, of course, is a big bad bug. 

And this is the part where using the library itself to test other parts of the library is helpful.  Because I used IncMillisecond in my test for EndOfTheDay, I found a bug in IncMillisecond. If I hadn’t done so, the problem might have been left lurking for a while longer.  Or maybe it never would have revealed itself, depending on how diligent my testing of it ended up once I actually got there. 

Luckily, it would appear that not too many of you are manipulating milliseconds for dates before the epoch, because there hasn’t been a big hue and cry about this problem. There have been some QC reports about it, though.  But clearly something is dreadfully wrong here. 

In the next post, we’ll take a look at just what that is.

Categories: News, Blogs, and Tips

Random Thoughts on the Passing Scene #153

Nick Hodges - Wed, 03/31/2010 - 17:03
  • Andreano has a new blog – or at least it is new to me; I just found it today. Two items there caught my eye.
  • The inestimable Mike Rozlog is on a hot streak, and he continues to give cool and interesting Webinars.  His latest is “Mastering Database Application Development with Delphi”.  Once again, he’ll be giving the webinar at three times during the day of 14 April 2010, so no matter where you are in the world, you should be able to attend one of them. 
  • These marketing people are busy. Want to know about RAD In Action and building database applications? There’s a web page for that.
  • Julian Bucknall, the CTODX (Chief Technology Officer of Developer Express) has an update for their VCL customers.
  • My mentioning of our move caused a bit of a stir in the comment section. A couple more thoughts on it. I can only speak for myself, but so far it is working out pretty well. I like the new space. I like my cube. I like that everyone is fairly close together but not too close. Our previous space was waaaaay too big for us, and you could go weeks without seeing someone from Sales or Support. Now we are all in one space, and it feels more like we are one team, which of course we are. I like that this new place is a good fit. I like that it is significantly more appropriate and significantly less expensive than our previous space. I like that this place is a new start. I like that we have a gigabit network. I like that we have projectors hanging from the ceilings in the conference rooms. I like that it is closer to the shopping mall across the street. But most of all, I like that it represents a significant investment in and commitment to our team. So for me, this is a big win.
Categories: News, Blogs, and Tips

Fun With Testing DateUtils.pas #3

Nick Hodges - Tue, 03/30/2010 - 19:20

Okay, things have settled down again, and it is time to get back to my adventure in TDateTime and DateUtils.pas.

When we last left off, I had started at the top of DateUtils, and just started working my way down.  I had written some tests for DateOf and TimeOf, and tried to write tests that pretty thoroughly exercised those functions.  I tried to hit the edges and boundaries, and to test all the different permutations and combinations of a date only, a time only, and both together. 

From there, I worked my way down the list, writing tests for IsLeapYear, IsPM, etc. 

One thing I did was to add IsAM to DateUtils.pas and simply implemented it as:

function IsAM(const AValue: TDateTime): Boolean;
begin
  Result := not IsPM(AValue);
end;

Now, that is really simple. Shoot, you don’t really need to write tests for that, right? I mean, I wrote a whole suite of tests for IsPM, so how could IsAM go wrong? Well, any number of ways – but the main one is that some day in the future, someone might come along, try to get cute or super-smart, and change the implementation. So I went ahead and wrote a whole bunch of tests for IsAM anyway. Now, if someone changes something, the tests should be able to recognize that.
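A couple of the boundary cases for such a suite might look like this. (A sketch, not the actual test code; it assumes IsPM treats exactly noon as PM, as DateUtils does.)

```pascal
// Sketch only – boundary tests around noon and midnight.
CheckTrue(IsAM(EncodeTime(11, 59, 59, 999)),
  'IsAM should be True for the last millisecond of the morning');
CheckFalse(IsAM(EncodeTime(12, 0, 0, 0)),
  'IsAM should be False at exactly noon');
CheckTrue(IsAM(EncodeTime(0, 0, 0, 0)),
  'IsAM should be True at midnight');
```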

Philosophical Note: As I’m doing this, I’m seeing more clearly than ever that writing tests is all about confidence moving forward.  Once you have taken the effort to write thorough, complete suites of unit tests, you can move forward with confidence.  You can make changes and fixes while feeling confident that if your change has unintended consequences, you’ll likely know about it. If you do find a bug, you write a test that “reveals” it, fix the bug so the test passes, and then you can move forward confident that you’ll know right away if that bug comes back to haunt you.  Confidence is a really good thing when it comes to writing code.

So, for instance, let’s look at the tests for IsInLeapYear. Leap years are a bit funky. Some years that you think are leap years are not – Quick: Was 1600 a leap year? What about 1900? Wikipedia actually has a good page on leap years. (Did you know that leap years are also called “intercalary years”? I sure didn’t.) The actual calculation of a leap year is a bit more complicated than “Is it divisible by 4?”.

function IsLeapYear(Year: Word): Boolean;
begin
  Result := (Year mod 4 = 0) and ((Year mod 100 <> 0) or (Year mod 400 = 0));
end;

Examining the code, you can see that the answers to the questions above are Yes and No. (As a side note, our QA Manager is a “Leapling”, born on February 29th. He’s really only 12 years old.)

So, how do you test something called IsInLeapYear?  The declaration is actually quite simple:

function IsInLeapYear(const AValue: TDateTime): Boolean;
begin
  Result := IsLeapYear(YearOf(AValue));
end;

But just because it is simple doesn’t mean that you shouldn’t thoroughly test it!  So I wrote a whole bunch of tests. First, I checked that random dates in years I know are leap years were properly identified as being in a leap year:

TestDate := EncodeDate(1960, 2, 29);
TestResult := IsInLeapYear(TestDate);
CheckTrue(TestResult, Format('%s is in a leap year, but IsInLeapYear says that it isn''t. Test #1', [DateToStr(TestDate)]));
TestDate := EncodeDate(2000, 7, 31);
TestResult := IsInLeapYear(TestDate);
CheckTrue(TestResult, Format('%s is in a leap year, but IsInLeapYear says that it isn''t. Test #2', [DateToStr(TestDate)]));
TestDate := EncodeDate(1600, 7, 31);
TestResult := IsInLeapYear(TestDate);
CheckTrue(TestResult, Format('%s is in a leap year, but IsInLeapYear says that it isn''t. Test #4', [DateToStr(TestDate)]));
TestDate := EncodeDate(1972, 4, 5);
TestResult := IsInLeapYear(TestDate);
CheckTrue(TestResult, Format('%s is in a leap year, but IsInLeapYear says that it isn''t. Test #5', [DateToStr(TestDate)]));
TestDate := EncodeDate(1888, 2, 29);
TestResult := IsInLeapYear(TestDate);
CheckTrue(TestResult, Format('%s is in a leap year, but IsInLeapYear says that it isn''t. Test #7', [DateToStr(TestDate)]));
TestDate := EncodeDate(2400, 2, 29);
TestResult := IsInLeapYear(TestDate);
CheckTrue(TestResult, Format('%s is in a leap year, but IsInLeapYear says that it isn''t. Test #8', [DateToStr(TestDate)]));

Note that I checked "normal" dates, but also dates in the far future (including the tricky 2400) as well as dates before the epoch (which is December 30, 1899, or a datetime value of 0.0). I’ll talk a little more about the epoch in a future post because the epoch is really, really important to TDateTime. It is also really, really troublesome. 
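To make the epoch concrete, here is a small sketch. (The printed date format depends on your locale settings; the 1890 date is just an example.)

```pascal
// A TDateTime of 0.0 is midnight, December 30, 1899.
WriteLn(DateTimeToStr(0.0));          // e.g. 12/30/1899
// Pre-epoch dates are negative, which is part of what makes
// millisecond arithmetic on them tricky.
WriteLn(EncodeDate(1890, 5, 6) < 0);  // TRUE
```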

Another thing to note is that this code uses (and thus tests) EncodeDate. And IsInLeapYear itself will exercise YearOf and IsLeapYear indirectly.  If a test in IsInLeapYear fails indirectly because of one of these, you’ll be able to figure that out pretty quickly, write tests specifically to reveal those problems, fix the problems, and then move forward with confidence that you’ve resolved the issues.

Anyway, I also wrote some negative test cases, checking to see that it returned False for dates that most definitely were not in leap years. I also wrote tests for dates in years that many folks might think are leap years but are in fact not leap years:

// Years that end in 00 are /not/ leap years, unless divisible by 400
TestDate := EncodeDate(1700, 2, 28);
TestResult := IsInLeapYear(TestDate);
CheckFalse(TestResult, Format('%s is not in a leap year, but IsInLeapYear says that it is. Test #6', [DateToStr(TestDate)]));
TestDate := EncodeDate(1900, 2, 28);
TestResult := IsInLeapYear(TestDate);
CheckFalse(TestResult, Format('%s is not in a leap year, but IsInLeapYear says that it is. Test #7', [DateToStr(TestDate)]));

Now that might seem like overkill for a simple function like IsInLeapYear, but I don’t think so. I am now really confident that, since we will be running these tests almost continuously on our Hudson server, no one can mess with, alter, or otherwise break the way leap years are calculated without us knowing about it immediately. And that’s sort of the whole point, right?

Categories: News, Blogs, and Tips

Random Thoughts on the Passing Scene #152

Nick Hodges - Tue, 03/23/2010 - 14:08
  • As you may know, we are moving to a new office very near where we are now. We got to visit the new digs today. Friday is moving day, and we’ll be in the new place for a great new start on Monday morning. I’m excited – I think that the move will be a great fresh start for us in a new place. We’ll be moving from offices to cubes, so that will be a bit of a culture shift and an adjustment, but it should be great. Pursuant to the move, of course, we have been and will be migrating servers to our new location. So if things aren’t working 100% correctly, give it a bit and try again. If things are persistently not working, then let us know. Our IT team is working very hard to make sure that the migration goes smoothly, but there will inevitably be hiccups along the way. Your patience and understanding are appreciated.
  • From time to time, people ask how to make a deep copy of an existing instance of a class.  Well, using the new, super cool RTTI, Alex is on the case.
  • Our Haiti Auction got written up in the San Jose Mercury News.  Nice!
  • New Delphi site offering specials on Delphi components: http://www.delphiday.com/
  • One of the best parts about Delphi is the awesome community, and one of the best parts of the community is the JEDI team. These guys are awesome, and provide an incredible amount of value to all of us. They had some newsgroups at forums.talko.net that have apparently stopped working. As a result, they have a new server at news.delphi-jedi.org, where you can point your NNTP newsreader on port 119.
Categories: News, Blogs, and Tips

Random Thoughts on the Passing Scene #151

Nick Hodges - Thu, 03/18/2010 - 16:22
Categories: News, Blogs, and Tips