=head1 NAME

Test::Harness - Run Perl standard test scripts with statistics

=head1 SYNOPSIS

    use Test::Harness;

    runtests(@test_files);

=head1 DESCRIPTION

B<STOP!> If all you want to do is write a test script, consider using
Test::Simple.  Otherwise, read on.

(By using the Test module, you can write test scripts without knowing
the exact output this module expects.  However, if you need to know
the specifics, read on!)

Perl test scripts print C<"ok N"> to standard output for each single
test, where C<N> is an increasing sequence of integers.  The first
line output by a standard test script is C<"1..M">, with C<M> being
the number of tests that should be run within the test script.
Test::Harness::runtests(@tests) runs all the test scripts named as
arguments and checks standard output for the expected strings.

After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.

=head2 The test script output

The following explains how Test::Harness interprets the output of your
test script.

=over 4

=item B<'1..M'>

This header tells how many tests there will be.  For example, C<1..10>
means you plan on running 10 tests.  This is a safeguard in case your
test dies quietly in the middle of its run.

It should be the first non-comment line output by your test program.

In certain instances, you may not know how many tests you will
ultimately be running.  In this case, it is permitted for the C<1..M>
header to appear as the B<last> line output by your test (again, it
can be followed by further comments).

Under B<no> circumstances should C<1..M> appear in the middle of your
output or more than once.

=item B<'ok', 'not ok'.  Ok?>

Any output from the test script to standard error is ignored and
bypassed, and thus will be seen by the user.  Lines written to
standard output containing C</^(not\s+)?ok\b/> are interpreted as
feedback for runtests().  All other lines are discarded.

C</^not ok/> indicates a failed test.
C</^ok/> is a successful test.

=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number.  It is tolerated if the test numbers after 'ok' are omitted.
In this case Test::Harness temporarily maintains its own counter until
the script supplies test numbers again.  So a script whose numbering
lapses can still be summarized accurately, for example as:

    FAILED tests 1, 3, 6
    Failed 3/6 tests, 50.00% okay

=item B<test names>

Anything after the test number but before the # is considered to be
the name of the test.

    ok 42 this is the name of the test

Currently, Test::Harness does nothing with this information.

=item B<Skipping tests>

If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test.  If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.

    ok 23 # skip Insufficient flogiston pressure.

Similarly, one can include an explanation in a C<1..0> line emitted if
the test script is skipped completely:

    1..0 # Skipped: no leverage found

=item B<Todo tests>

If the standard output line contains the substring C< # TODO > after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test.  The text
afterwards is the thing that has to be done before this test will
succeed.

    not ok 13 # TODO harness the power of the atom

Note that the TODO must have a space after it.

Alternatively, you can specify a list of what tests are todo as part
of the test header.  This only works if the header appears at the
beginning of the test.  This style is B<deprecated>.

These tests represent a feature to be implemented or a bug to be
fixed, and act as something of an executable "thing to do" list.  They
are B<not> expected to succeed.  Should a todo test begin succeeding,
Test::Harness will report it as a bonus.
This indicates that whatever you were supposed to do has been done,
and you should promote this to a normal test.

=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately.  In that case the test script prints the magic words

    Bail out!

to standard output.  Any message after these words will be displayed
by C<Test::Harness> as the reason why testing is stopped.

=item B<Comments>

Additional comments may be put into the testing output on their own
lines.  Comment lines should begin with a '#'; Test::Harness will
ignore them.

    # Life is good, the sun is shining, RAM is cheap.
    # got 'Bush' expected 'Gore'

=item B<Anything else>

Any other output Test::Harness sees, it will silently ignore B<BUT WE
PLAN TO CHANGE THIS!>  If you wish to place additional output in your
test script, please use a comment.

=back

=head2 Taint mode

Test::Harness will honor the C<-T> or C<-t> in the #! line of your
test files, so a test that begins with a C<-T> shebang line will be
run with taint mode on.

=head2 Configuration variables

These variables can be used to configure the behavior of
Test::Harness.  They are exported on request.

=over 4

=item B<$Test::Harness::Verbose>

The global variable C<$Test::Harness::Verbose> is exportable and can
be used to let C<runtests()> display the standard output of the script
without altering the behavior otherwise.  The F<prove> utility's C<-v>
flag will set this.

=item B<$Test::Harness::switches>

The global variable C<$Test::Harness::switches> is exportable and can
be used to set perl command line options used for running the test
script(s).  The default value is C<-w>.  It overrides
C<HARNESS_SWITCHES>.

=back

=head2 Failure

It will happen: your tests will fail.  After you mop up your ego, you
can begin examining the summary report:

            Test returned status 3 (wstat 768, 0x300)
    DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
        Failed 10/20 tests, 50.00% okay
    Failed Test  Stat Wstat Total Fail  Failed  List of Failed
    -----------------------------------------------------------------------
    Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

A test script that fails in this way exited with non-zero status,
indicating something dubious happened.

The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed.  Successive failures may be
abbreviated (ie. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back

=head1 FUNCTIONS

Test::Harness currently only has one function; here it is.

=over 4

=item B<runtests>

    my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above).  It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok.  Otherwise it will die() with
one of the messages in the DIAGNOSTICS section.

This is just _run_all_tests() plus _show_results().

=item B<_all_ok>

    my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

=item B<_globdir>

    my @files = _globdir $dir;

Returns all the files in a directory.  This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

=item B<_run_all_tests>

    my($total, $failed) = _run_all_tests(@test_files);

Runs all the given C<@test_files> (as C<runtests()>) but does it
quietly (no report).  $total is a hash ref summary of all the tests
run.
Its keys and values are this:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests ran
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files ran
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If C<< $total->{bad} == 0 >> and C<< $total->{max} > 0 >>, you've
got a successful test.

$failed is a hash ref of all the test scripts which failed.  Each key
is the name of a test script; each value is another hash representing
how that script failed.  Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string)

C<$failed> should be empty if everything passed.

B<NOTE> Currently this function is still noisy.  I'm working on it.
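The line-classification rules that C<_run_all_tests()> applies
(described above under "The test script output") can be sketched in a
few lines of Perl.  This is an illustration only, not the module's
actual implementation, and the sample output lines are invented:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the harness's parsing rules: only /^(not\s+)?ok\b/ lines
# count, an omitted test number falls back to an internal counter,
# and "# TODO" failures are expected and not counted against you.
my @output = (
    '1..4',
    'ok 1',
    'not ok 2',
    'ok # skip no flogiston',            # unnumbered: counter supplies 3
    'not ok 4 # TODO harness the atom',  # todo failure: not counted
);

my $counter = 0;
my @failed;
for my $line (@output) {
    next unless $line =~ /^(not\s+)?ok\b(?:\s+(\d+))?/;
    my ($not, $num) = ($1, $2);
    $counter = defined $num ? $num : $counter + 1;
    push @failed, $counter if $not and $line !~ /#\s*TODO\s/;
}
print @failed
    ? "FAILED tests @failed\n"      # prints "FAILED tests 2"
    : "All tests successful.\n";
```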
1N/A my($leader, $ml) = _mk_leader($test_file, $width); 1N/AGenerates the 't/foo........' $leader for the given C<$test_file> as well 1N/Aas a similar version which will overwrite the current line (by use of 1N/A\r and such). C<$ml> may be empty if Test::Harness doesn't think you're 1N/A if ($^O
eq 'VMS') { $
te =~ s/^.*\.t\./\[.t./s; }
1N/A $
ml =
"\r$blank\r$leader" 1N/A=item B<_leader_width> 1N/A my($width) = _leader_width(@test_files); 1N/ACalculates how wide the leader should be based on the length of the 1N/A my $
suf = /\.(\w+)$/ ? $
1 :
'';
1N/A # + 3 : we want three dots between the test name and the "ok" 1N/A print "All tests successful$bonusmsg.\n";
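A simplified re-implementation of what these two helpers do, for
illustration only (the real C<_mk_leader> also handles the
C<\r>-based overwrite mode and VMS paths, and C<_leader_width>
accounts for file extensions):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Pad each test name with dots so the ok/FAILED column lines up.
sub leader_width {
    my $max = 0;
    for my $file (@_) {
        $max = length $file if length $file > $max;
    }
    return $max + 3;    # three dots between the name and the status
}

sub mk_leader {
    my ($file, $width) = @_;
    return $file . '.' x ($width - length $file);
}

my @tests = ('t/foo.t', 't/barbaz.t');
my $width = leader_width(@tests);
print mk_leader($_, $width), "ok\n" for @tests;
# t/foo.t......ok
# t/barbaz.t...ok
```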
1N/A die "FAILED--no tests were run for some reason.\n";
1N/A die "FAILED--$tot->{tests} test $blurb could be run, ".
1N/A "alas--no output ever seen\n";
1N/A my $
subpct =
sprintf " %d/%d subtests failed, %.2f%% okay.",
1N/A # Now write to formats 1N/A die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
1N/A printf(
"Files=%d, Tests=%d, %s\n",
1N/A warn "1..M can only appear at the beginning or end of tests\n" 1N/A print "Test output counter mismatch [test $curr]\n";
1N/A print "Confused test output: test $curr answered after ".
1N/A die "FAILED--Further testing stopped" .
1N/A# For slow connections, we save lots of bandwidth by printing only once 1N/A " UNEXPECTEDLY SUCCEEDED)")
1N/A# Test program go boom. 1N/A printf "$test->{ml}dubious\n\tTest returned status $estatus ".
1N/A "(wstat %d, 0x%x)\n",
1N/A print "\t\t(VMS status is $estatus)\n" if $^O
eq 'VMS';
1N/A print "\ttest program seems to have generated a core\n";
1N/A print "\tafter all the subtests completed successfully\n";
1N/A # Figure out our longest name string for formatting purposes. 1N/A .
" @>> @>>>> @>>>> @>>> ^##.##% " 1N/A .
'{ $Curtest->{name}, $Curtest->{estat},' 1N/A .
' $Curtest->{wstat}, $Curtest->{max},' 1N/A .
' $Curtest->{failed}, $Curtest->{percent},' 1N/A .
' $Curtest->{canon}' 1N/A .
'$Curtest->{canon}' 1N/A eval {
# we may not have a WCOREDUMP 1N/A local $^W =
0;
# *.ph files are often *very* noisy 1N/A push @
result,
"\tFailed $failed/$max tests, ";
1N/A my $
skipmsg =
" (less $skipped skipped test$ender: $good okay, ";
=head1 EXPORT

C<&runtests> is exported by Test::Harness by default.

C<$verbose>, C<$switches> and C<$debug> are exported upon request.

=head1 DIAGNOSTICS

=over 4

=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful, some statistics about the performance
are printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped: %s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.

=back

=head1 ENVIRONMENT

=over 4

=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests.  This allows
the tests to determine if they are being executed through the harness
or by any other means.

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal.  If it is not
set then it will default to C<COLUMNS>.  If this is not set, it will
default to 80.  Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.

=item C<HARNESS_COMPILE_TEST>

When true it will make harness attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory.

=item C<HARNESS_DEBUG>

If true, Test::Harness will print debugging information about itself
as it runs the tests.  This is different from C<HARNESS_VERBOSE>,
which prints the output from the test being run.  Setting
C<$Test::Harness::Debug> will override this, or you can use the C<-d>
switch in the F<prove> utility.
=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

    LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is interpreted with respect to the
current directory at the moment runtests() was called.  Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more
predictable results.

=item C<HARNESS_IGNORE_EXITCODE>

Makes harness ignore the exit status of child processes when defined.

=item C<HARNESS_NOTTY>

When set to a true value, forces it to behave as though STDOUT were
not a console.  You may need to set this if you don't want harness to
output more frequent progress messages using carriage returns.  Some
consoles may not handle carriage returns properly (which results in a
somewhat messy output).

=item C<HARNESS_OK_SLOW>

If true, the C<ok> messages are printed out only once per second.
This reduces output and may help increase testing speed over slow
connections, or with very large numbers of tests.

=item C<HARNESS_PERL>

Usually your tests will be run by C<$^X>, the currently-executing
Perl.  However, you may want to have it run by a different executable,
such as a threading perl, or a different version.

If you're using the F<prove> utility, you can use the C<--perl>
switch.

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test.  For example, setting C<HARNESS_PERL_SWITCHES> to C<-W>
will run all tests with all warnings enabled.

=item C<HARNESS_VERBOSE>

If true, Test::Harness will output the verbose results of running its
tests.  Setting C<$Test::Harness::verbose> will override this, or you
can use the C<-v> switch in the F<prove> utility.

=back
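As an illustration, several of these variables can be set from within
a driver script before calling C<runtests()>; the C<t/*.t> location
below is only an example:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Test::Harness qw(&runtests);

# Configure the harness through its environment (see the list above).
$ENV{HARNESS_VERBOSE}       = 1;     # echo each test's output
$ENV{HARNESS_PERL_SWITCHES} = '-W';  # run every test with all warnings
$ENV{HARNESS_COLUMNS}       = 100;   # assume a wider terminal

my @tests = glob 't/*.t';            # illustrative location
runtests(@tests) if @tests;          # runtests() dies on failure
```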
=head1 EXAMPLE

Here's how Test::Harness tests itself:

    $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
    All tests successful.
    Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)

=head1 SEE ALSO

L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.

=head1 AUTHORS

Either Tim Bunce or Andreas Koenig, we don't know.  What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages.  Numerous anonymous contributors
exist.  Andreas Koenig held the torch for many years.

Current maintainer is Andy Lester C<< <andy@petdance.com> >>.

This program is free software; you can redistribute it and/or modify
it under the same terms as Perl itself.

=head1 TODO

Provide a way of running tests quietly (ie. no printing) for automated
validation of tests.  This will probably take the form of a version of
runtests() which rather than printing its output returns raw data on
the state of the tests.  (Partially done in Test::Harness::Straps)

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.
(Partially done with new skip test styles)

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

Implement Straps total_results()

Completely redo the print summary code.

Implement Straps callbacks.  (experimentally implemented)

Straps->analyze_file() not taint clean, don't know if it can be

Fix that damned VMS nit.

HARNESS_TODOFAIL to display TODO failures

Add a test for verbose.

Change internal list of test results to a hash.

Fix stats display when there's an overrun.
Fix so perls with spaces in the filename work.

Keep whittling away at _run_all_tests()

Clean up how the summary is printed.  Get rid of those damned formats.

=head1 BUGS

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.

You can also mail bugs, fixes and enhancements to
C<< <bug-test-harness@rt.cpan.org> >>.

=head1 AUTHOR

Original code by Michael G Schwern, maintained by Andy Lester.

=head1 COPYRIGHT

Copyright 2003 by Michael G Schwern C<< <schwern@pobox.com> >>,
Andy Lester C<< <andy@petdance.com> >>.

This program is free software; you can redistribute it and/or modify
it under the same terms as Perl itself.

=cut