Recommendations on test groupings with JUnit?

Posted By:   Joseph_Hobbs
Posted On:   Monday, February 4, 2008 10:47 AM

Does anyone have any recommendations on how to arrange/group JUnit tests? Should there be one large test method per method under test, one small test method per input case, etc.?



So, as an example, assume the following method:

int ReportManager.createReport(Report report)


In this case, I'd want to test for the following:




  • Does it throw an exception if report == null?

  • Does it throw an exception if report is empty?

  • Does it throw an exception if report is a duplicate?

  • Does it return a valid ID if successful?



So given those four possibilities, how would you do it? Would you create a single test method (testCreateReport) that tests all four? Or would you create four separate test methods (testCreateReportNull, testCreateReportEmpty, testCreateReportDuplicate, testCreateReportSuccess)?
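For concreteness, here is a rough sketch of the four-separate-methods option. The Report and ReportManager stubs below are hypothetical placeholders (just enough for the sketch to compile); the exception types are assumptions, not anything from a real API:

```java
import static org.junit.Assert.assertTrue;

import java.util.HashSet;
import java.util.Set;

import org.junit.Test;

// Hypothetical stand-ins for the real classes, just so the sketch compiles;
// substitute your actual Report and ReportManager.
class Report {
    final String name;
    Report(String name) { this.name = name; }
    boolean isEmpty() { return name == null || name.length() == 0; }
}

class ReportManager {
    private final Set<String> existing = new HashSet<String>();
    private int nextId = 1;

    int createReport(Report report) {
        if (report == null) {
            throw new IllegalArgumentException("report is null");
        }
        if (report.isEmpty()) {
            throw new IllegalArgumentException("report is empty");
        }
        if (!existing.add(report.name)) {
            throw new IllegalStateException("duplicate report");
        }
        return nextId++;
    }
}

// One focused test method per input case: a failing test names the exact
// case that broke, and no test depends on the outcome of another.
public class ReportManagerTest {

    @Test(expected = IllegalArgumentException.class)
    public void testCreateReportNull() {
        new ReportManager().createReport(null);
    }

    @Test(expected = IllegalArgumentException.class)
    public void testCreateReportEmpty() {
        new ReportManager().createReport(new Report(""));
    }

    @Test(expected = IllegalStateException.class)
    public void testCreateReportDuplicate() {
        ReportManager manager = new ReportManager();
        manager.createReport(new Report("weekly"));
        manager.createReport(new Report("weekly")); // second call must fail
    }

    @Test
    public void testCreateReportSuccess() {
        int id = new ReportManager().createReport(new Report("weekly"));
        assertTrue("expected a positive report ID", id > 0);
    }
}
```

One practical difference between the two layouts: with a single big testCreateReport, JUnit stops at the first failed assertion and the remaining cases never run, whereas separate methods each pass or fail independently.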



Thanks!


Re: Recommendations on test groupings with JUnit?

Posted By:   Robert_Lybarger  
Posted On:   Monday, February 4, 2008 08:05 PM

Separate methods for each specific case in question. Actually, the slightly more formal approach the devs at my workplace take is to first carefully define all the input requirements for the function/method under consideration, starting with the easy things (null) and working down to the things that take longer to validate. Once this list of requirements is agreed upon and documented, the test class includes at least one "negative" test to make sure that a violation of each requirement, on its own, generates the proper error response (return value, exception, whatever). The requirements are mainly just an enumeration in a version-controlled file somewhere, like:

  1. The input string argument shall not be null.

  2. The input string argument shall not be the empty string.

  3. The input string must successfully parse to a date using the "MMddyyyy" format.

  4. ...etc...


This would yield three (or more) test methods for the negative conditions, and at least one more for the positive condition -- that is, to make sure the success case works. As the requirements are updated, the corresponding unit tests are updated too, and the fall-out of the code change on other unit tests -- hopefully none -- is analyzed at the next build cycle.
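That requirement list maps almost one-to-one onto test methods. A sketch, using a hypothetical parseDate method written to enforce the three requirements above (the method name and the choice of exception types are my assumptions, not from the original list):

```java
import static org.junit.Assert.assertNotNull;

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

import org.junit.Test;

// Hypothetical method under test, enforcing the three documented requirements.
class DateParser {
    static Date parseDate(String input) throws ParseException {
        if (input == null) {
            throw new IllegalArgumentException("input shall not be null");  // requirement 1
        }
        if (input.length() == 0) {
            throw new IllegalArgumentException("input shall not be empty"); // requirement 2
        }
        SimpleDateFormat format = new SimpleDateFormat("MMddyyyy");
        format.setLenient(false);   // reject values like month 13
        return format.parse(input); // requirement 3: throws ParseException on bad input
    }
}

// At least one negative test per documented requirement, plus one
// positive test for the success path.
public class DateParserTest {

    @Test(expected = IllegalArgumentException.class)
    public void testNullInput() throws ParseException {
        DateParser.parseDate(null);
    }

    @Test(expected = IllegalArgumentException.class)
    public void testEmptyInput() throws ParseException {
        DateParser.parseDate("");
    }

    @Test(expected = ParseException.class)
    public void testUnparseableInput() throws ParseException {
        DateParser.parseDate("13452008"); // month 13 cannot parse as "MMddyyyy"
    }

    @Test
    public void testValidInput() throws ParseException {
        assertNotNull(DateParser.parseDate("02042008")); // February 4, 2008
    }
}
```

Numbering the requirements also gives the tests a natural audit trail: each negative test can cite the requirement it covers, so when a requirement changes it is obvious which test must change with it.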