# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.80 2003/12/31 02:39:21 andy Exp $

package Test::Harness;

require 5.004;
use Test::Harness::Straps;
use Test::Harness::Assert;
use Exporter;
use Benchmark;
use Config;
use strict;

use vars qw(
    $VERSION
    @ISA @EXPORT @EXPORT_OK
    $Verbose $Switches $Debug
    $verbose $switches $debug
    $Have_Devel_Corestack
    $Curtest
    $Columns
    $ML $Last_ML_Print
    $Strap
);

=head1 NAME

Test::Harness - Run Perl standard test scripts with statistics

=head1 VERSION

Version 2.40

  $Header: /home/cvs/test-harness/lib/Test/Harness.pm,v 1.80 2003/12/31 02:39:21 andy Exp $

=cut

$VERSION = '2.40';

# Backwards compatibility for exportable variable names.
*verbose  = *Verbose;
*switches = *Switches;
*debug    = *Debug;

$Have_Devel_Corestack = 0;

$ENV{HARNESS_ACTIVE} = 1;

END {
    # For VMS.
    delete $ENV{HARNESS_ACTIVE};
}

# Some experimental versions of OS/2 build have broken $?
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};

my $Ok_Slow = $ENV{HARNESS_OK_SLOW};

$Strap = Test::Harness::Straps->new;

@ISA       = ('Exporter');
@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Verbose  = $ENV{HARNESS_VERBOSE} || 0;
$Debug    = $ENV{HARNESS_DEBUG} || 0;
$Switches = "-w";
$Columns  = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;    # Some shells have trouble with a full line of text.
=head1 SYNOPSIS

  use Test::Harness;

  runtests(@test_files);

=head1 DESCRIPTION

B<STOP!> If all you want to do is write a test script, consider using
Test::Simple. Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects. However, if you need to
know the specifics, read on!)

Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers. The first line
output by a standard test script is C<"1..M">, with C<M> being the
number of tests that should be run within the test
script. Test::Harness::runtests(@tests) runs all the test scripts
named as arguments and checks standard output for the expected
C<"ok N"> strings.

After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.

=head2 The test script output

The following explains how Test::Harness interprets the output of your
test program.

=over 4

=item B<'1..M'>

This header tells how many tests there will be. For example, C<1..10>
means you plan on running 10 tests. This is a safeguard in case your
test dies quietly in the middle of its run.

It should be the first non-comment line output by your test program.

In certain instances, you may not know how many tests you will
ultimately be running. In this case, it is permitted for the 1..M
header to appear as the B<last> line output by your test (again, it
can be followed by further comments).

Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.

=item B<'ok', 'not ok'. Ok?>

Any output from the test script to standard error is ignored and
bypassed, and thus will be seen by the user. Lines written to standard
output containing C</^(not\s+)?ok\b/> are interpreted as feedback for
runtests(). All other lines are discarded.

C</^not ok/> indicates a failed test. C</^ok/> is a successful test.


=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number. If the test numbers after 'ok' are omitted, Test::Harness
temporarily maintains its own counter until the script supplies test
numbers again. So the following test script

  print <<END;
  1..6
  not ok
  ok
  not ok
  ok
  ok
  END

will generate

  FAILED tests 1, 3, 6
  Failed 3/6 tests, 50.00% okay

=item B<test names>

Anything after the test number but before the # is considered to be
the name of the test.

  ok 42 this is the name of the test

Currently, Test::Harness does nothing with this information.

=item B<Skipping tests>

If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test. If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.

  ok 23 # skip Insufficient flogiston pressure.

Similarly, one can include an explanation in a C<1..0> line
emitted if the test script is skipped completely:

  1..0 # Skipped: no leverage found

=item B<Todo tests>

If the standard output line contains the substring C< # TODO > after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test. The text
afterwards is the thing that has to be done before this test will
succeed.

  not ok 13 # TODO harness the power of the atom

Note that the TODO must have a space after it.

=begin _deprecated

Alternatively, you can specify a list of what tests are todo as part
of the test header.

  1..23 todo 5 12 23

This only works if the header appears at the beginning of the test.

This style is B<deprecated>.

=end _deprecated

These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list. They are
B<not> expected to succeed. Should a todo test begin succeeding,
Test::Harness will report it as a bonus. This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.

=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately. In that case the test script prints the magic words

  Bail out!

to standard output. Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.

=item B<Comments>

Additional comments may be put into the testing output on their own
lines. Comment lines should begin with a '#'; Test::Harness will
ignore them.

  ok 1
  # Life is good, the sun is shining, RAM is cheap.
  not ok 2
  # got 'Bush' expected 'Gore'

=item B<Anything else>

Any other output Test::Harness sees, it will silently ignore B<BUT WE
PLAN TO CHANGE THIS!> If you wish to place additional output in your
test script, please use a comment.

=back
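Putting the pieces above together, a minimal hand-rolled test script
might look like the sketch below (illustrative only; in practice you
would use Test or Test::Simple rather than printing the protocol
yourself):

```perl
#!/usr/bin/perl -w
# Illustrative only: a hand-rolled test script emitting the output
# forms described above: a plan, numbered tests, a comment, a skip,
# a TODO, and a final unnumbered test.
use strict;

my @lines = (
    "1..5",
    "ok 1 basic sanity",
    "not ok 2",
    "# got 'Bush' expected 'Gore'",
    "ok 3 # skip Insufficient flogiston pressure.",
    "not ok 4 # TODO harness the power of the atom",
    "ok",    # unnumbered; the harness counts this as test 5
);
print "$_\n" for @lines;
```

Run under runtests(), test 2 is reported as a failure, test 3 as a
skip, test 4 as a todo, and the unnumbered C<ok> is counted as test 5.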

=head2 Taint mode

Test::Harness will honor the C<-T> or C<-t> in the #! line on your
test files. So if you begin a test with:

  #!perl -T

the test will be run with taint mode on.

=head2 Configuration variables

These variables can be used to configure the behavior of
Test::Harness. They are exported on request.

=over 4

=item B<$Test::Harness::Verbose>

The global variable C<$Test::Harness::Verbose> is exportable and can be
used to let C<runtests()> display the standard output of the script
without altering the behavior otherwise. The F<prove> utility's C<-v>
flag will set this.

=item B<$Test::Harness::switches>

The global variable C<$Test::Harness::switches> is exportable and can be
used to set perl command line options used for running the test
script(s). The default value is C<-w>. It overrides C<HARNESS_SWITCHES>.

=back


=head2 Failure

It will happen: your tests will fail. After you mop up your ego, you
can begin examining the summary report:

  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  t/waterloo..........dubious
          Test returned status 3 (wstat 768, 0x300)
  DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
          Failed 10/20 tests, 50.00% okay
  Failed Test   Stat Wstat Total Fail  Failed  List of Failed
  -----------------------------------------------------------------------
  t/waterloo.t     3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
  Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t. It failed 10 of 20 tests and
exited with non-zero status, indicating something dubious happened.

The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed. Successive failures may be
abbreviated (i.e. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back
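The abbreviation of successive failures can be sketched as a small
range-collapsing routine. This is not the module's actual code (the
private _canonfailed() below does the real work) and it assumes
duplicate-free test numbers:

```perl
#!/usr/bin/perl -w
# A sketch of how successive failed test numbers collapse into ranges
# like "15-20". Hypothetical helper, not part of Test::Harness.
use strict;

sub collapse_ranges {
    my @failed = sort { $a <=> $b } @_;
    my @canon;
    my ($min, $last) = ($failed[0], $failed[0]);
    for my $n (@failed[1 .. $#failed], undef) {
        if (!defined $n or $n > $last + 1) {
            # Close out the current run of consecutive numbers.
            push @canon, $min == $last ? $last : "$min-$last";
            ($min, $last) = ($n, $n);
        }
        else {
            $last = $n;    # still consecutive; extend the run
        }
    }
    return join ' ', @canon;
}

print collapse_ranges(15, 16, 17, 18, 19, 20, 3), "\n";    # 3 15-20
```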


=head2 Functions

Test::Harness currently only has one function; here it is.

=over 4

=item B<runtests>

  my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above). It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok. Otherwise it will die() with
one of the messages in the DIAGNOSTICS section.

=for _private

This is just _run_all_tests() plus _show_results().

=cut

sub runtests {
    my(@tests) = @_;

    local ($\, $,);

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = _all_ok($tot);

    assert(($ok xor keys %$failedtests),
           q{ok status jives with $failedtests});

    return $ok;
}

=begin _private

=item B<_all_ok>

  my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

=cut

sub _all_ok {
    my($tot) = shift;

    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
}

=item B<_globdir>

  my @files = _globdir $dir;

Returns all the files in a directory. This is shorthand for backwards
compatibility on systems where glob() doesn't work right.

=cut

sub _globdir {
    opendir DIRH, shift;
    my @f = readdir DIRH;
    closedir DIRH;

    return @f;
}

=item B<_run_all_tests>

  my($total, $failed) = _run_all_tests(@test_files);

Runs all the given C<@test_files> (as C<runtests()>) but does it
quietly (no report). $total is a hash ref summary of all the tests
run. Its keys and values are these:

  bonus           Number of individual todo tests unexpectedly passed
  max             Number of individual tests run
  ok              Number of individual tests passed
  sub_skipped     Number of individual tests skipped
  todo            Number of individual todo tests

  files           Number of test files run
  good            Number of test files passed
  bad             Number of test files failed
  tests           Number of test files originally given
  skipped         Number of test files skipped

If C<< $total->{bad} == 0 >> and C<< $total->{max} > 0 >>, you've
got a successful test.

$failed is a hash ref of all the test scripts which failed. Each key
is the name of a test script, each value is another hash representing
how that script failed. Its keys are these:

  name        Name of the test which failed
  estat       Script's exit value
  wstat       Script's wait status
  max         Number of individual tests
  failed      Number which failed
  percent     Percentage of tests which failed
  canon       List of tests which failed (as string).

C<$failed> should be empty if everything passed.

B<NOTE> Currently this function is still noisy. I'm working on it.

=cut

#'#
sub _run_all_tests {
    my(@tests) = @_;
    local($|) = 1;
    my(%failedtests);

    # Test-wide totals.
    my(%tot) = (
        bonus       => 0,
        max         => 0,
        ok          => 0,
        files       => 0,
        bad         => 0,
        good        => 0,
        tests       => scalar @tests,
        sub_skipped => 0,
        todo        => 0,
        skipped     => 0,
        bench       => 0,
    );

    my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = new Benchmark;

    my $width = _leader_width(@tests);
    foreach my $tfile (@tests) {
        if ( $Test::Harness::Debug ) {
            print "# Running: ", $Strap->_command_line($tfile), "\n";
        }

        $Last_ML_Print = 0;  # so each test prints at least once
        my($leader, $ml) = _mk_leader($tfile, $width);
        local $ML = $ml;

        print $leader;

        $tot{files}++;

        $Strap->{_seen_header} = 0;
        my %results = $Strap->analyze_file($tfile) or
          do { warn $Strap->{error}, "\n"; next };

        # state of the current test.
        my @failed = grep { !$results{details}[$_-1]{ok} }
                     1..@{$results{details}};
        my %test = (
            ok          => $results{ok},
            'next'      => $Strap->{'next'},
            max         => $results{max},
            failed      => \@failed,
            bonus       => $results{bonus},
            skipped     => $results{skip},
            skip_reason => $results{skip_reason},
            skip_all    => $Strap->{skip_all},
            ml          => $ml,
        );

        $tot{bonus}       += $results{bonus};
        $tot{max}         += $results{max};
        $tot{ok}          += $results{ok};
        $tot{todo}        += $results{todo};
        $tot{sub_skipped} += $results{skip};

        my($estatus, $wstatus) = @results{qw(exit wait)};

        if ($results{passing}) {
            if ($test{max} and $test{skipped} + $test{bonus}) {
                my @msg;
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                  if $test{skipped};
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
                  if $test{bonus};
                print "$test{ml}ok\n ".join(', ', @msg)."\n";
            } elsif ($test{max}) {
                print "$test{ml}ok\n";
            } elsif (defined $test{skip_all} and length $test{skip_all}) {
                print "skipped\n all skipped: $test{skip_all}\n";
                $tot{skipped}++;
            } else {
                print "skipped\n all skipped: no reason given\n";
                $tot{skipped}++;
            }
            $tot{good}++;
        }
        else {
            # List unrun tests as failures.
            if ($test{'next'} <= $test{max}) {
                push @{$test{failed}}, $test{'next'}..$test{max};
            }
            # List overruns as failures.
            else {
                my $details = $results{details};
                foreach my $overrun ($test{max}+1..@$details) {
                    next unless ref $details->[$overrun-1];
                    push @{$test{failed}}, $overrun;
                }
            }

            if ($wstatus) {
                $failedtests{$tfile} = _dubious_return(\%test, \%tot,
                                                       $estatus, $wstatus);
                $failedtests{$tfile}{name} = $tfile;
            }
            elsif ($results{seen}) {
                if (@{$test{failed}} and $test{max}) {
                    my ($txt, $canon) = _canonfailed($test{max},$test{skipped},
                                                     @{$test{failed}});
                    print "$test{ml}$txt";
                    $failedtests{$tfile} = { canon   => $canon,
                                             max     => $test{max},
                                             failed  => scalar @{$test{failed}},
                                             name    => $tfile,
                                             percent => 100*(scalar @{$test{failed}})/$test{max},
                                             estat   => '',
                                             wstat   => '',
                                           };
                } else {
                    print "Don't know which tests failed: got $test{ok} ok, ".
                          "expected $test{max}\n";
                    $failedtests{$tfile} = { canon   => '??',
                                             max     => $test{max},
                                             failed  => '??',
                                             name    => $tfile,
                                             percent => undef,
                                             estat   => '',
                                             wstat   => '',
                                           };
                }
                $tot{bad}++;
            } else {
                print "FAILED before any test output arrived\n";
                $tot{bad}++;
                $failedtests{$tfile} = { canon   => '??',
                                         max     => '??',
                                         failed  => '??',
                                         name    => $tfile,
                                         percent => undef,
                                         estat   => '',
                                         wstat   => '',
                                       };
            }
        }

        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;
            }
        }
    }
    $tot{bench} = timediff(new Benchmark, $t_start);

    $Strap->_restore_PERL5LIB;

    return(\%tot, \%failedtests);
}

=item B<_mk_leader>

  my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given C<$test_file> as well
as a similar version which will overwrite the current line (by use of
\r and such). C<$ml> may be empty if Test::Harness doesn't think you're
on a TTY.

The C<$width> is the width of the "yada/blah.." string.

=cut

sub _mk_leader {
    my($te, $width) = @_;
    chomp($te);
    $te =~ s/\.\w+$/./;

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));
    my $ml = "";

    $ml = "\r$blank\r$leader"
      if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
}

=item B<_leader_width>

  my($width) = _leader_width(@test_files);

Calculates how wide the leader should be based on the length of the
longest test name.

=cut

sub _leader_width {
    my $maxlen = 0;
    my $maxsuflen = 0;
    foreach (@_) {
        my $suf    = /\.(\w+)$/ ? $1 : '';
        my $len    = length;
        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;
    }
    # + 3 : we want three dots between the test name and the "ok"
    return $maxlen + 3 - $maxsuflen;
}


sub _show_results {
    my($tot, $failedtests) = @_;

    my $pct;
    my $bonusmsg = _bonusmsg($tot);

    if (_all_ok($tot)) {
        print "All tests successful$bonusmsg.\n";
    } elsif (!$tot->{tests}) {
        die "FAILED--no tests were run for some reason.\n";
    } elsif (!$tot->{max}) {
        my $blurb = $tot->{tests}==1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
    } else {
        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                             $tot->{max} - $tot->{ok}, $tot->{max},
                             $percent_ok;

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
            $Curtest = $failedtests->{$script};
            write;
        }
        if ($tot->{bad}) {
            $bonusmsg =~ s/^,\s*//;
            print "$bonusmsg.\n" if $bonusmsg;
            die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
                "$subpct\n";
        }
    }

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
}


my %Handlers = ();
$Strap->{callback} = sub {
    my($self, $line, $type, $totals) = @_;
    print $line if $Verbose;

    my $meth = $Handlers{$type};
    $meth->($self, $line, $type, $totals) if $meth;
};


$Handlers{header} = sub {
    my($self, $line, $type, $totals) = @_;

    warn "Test header seen more than once!\n" if $self->{_seen_header};

    $self->{_seen_header}++;

    warn "1..M can only appear at the beginning or end of tests\n"
      if $totals->{seen} &&
         $totals->{max} < $totals->{seen};
};

$Handlers{test} = sub {
    my($self, $line, $type, $totals) = @_;

    my $curr   = $totals->{seen};
    my $next   = $self->{'next'};
    my $max    = $totals->{max};
    my $detail = $totals->{details}[-1];

    if( $detail->{ok} ) {
        _print_ml_less("ok $curr/$max");

        if( $detail->{type} eq 'skip' ) {
            $totals->{skip_reason} = $detail->{reason}
              unless defined $totals->{skip_reason};
            $totals->{skip_reason} = 'various reasons'
              if $totals->{skip_reason} ne $detail->{reason};
        }
    }
    else {
        _print_ml("NOK $curr");
    }

    if( $curr > $next ) {
        print "Test output counter mismatch [test $curr]\n";
    }
    elsif( $curr < $next ) {
        print "Confused test output: test $curr answered after ".
              "test ", $next - 1, "\n";
    }

};

$Handlers{bailout} = sub {
    my($self, $line, $type, $totals) = @_;

    die "FAILED--Further testing stopped" .
        ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
};


sub _print_ml {
    print join '', $ML, @_ if $ML;
}


# For slow connections, we save lots of bandwidth by printing only once
# per second.
sub _print_ml_less {
    if( !$Ok_Slow || $Last_ML_Print != time ) {
        _print_ml(@_);
        $Last_ML_Print = time;
    }
}

sub _bonusmsg {
    my($tot) = @_;

    my $bonusmsg = '';
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
                 " UNEXPECTEDLY SUCCEEDED)")
      if $tot->{bonus};

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
                     . " skipped";
    }

    return $bonusmsg;
}

# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my ($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",
           $wstatus,$wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if (_corestatus($wstatus)) { # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
        } else {
            print "\ttest program seems to have generated a core\n";
        }
    }

    $tot->{bad}++;

    if ($test->{max}) {
        if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
            print "\tafter all the subtests completed successfully\n";
            $percent = 0;
            $failed = 0;        # But we do not set $canon!
        }
        else {
            push @{$test->{failed}}, $test->{'next'}..$test->{max};
            $failed = @{$test->{failed}};
            (my $txt, $canon) = _canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
            $percent = 100*(scalar @{$test->{failed}})/$test->{max};
            print "DIED. ",$txt;
        }
    }

    return { canon   => $canon,  max => $test->{max} || '??',
             failed  => $failed,
             percent => $percent,
             estat   => $estatus, wstat => $wstatus,
           };
}


sub _create_fmts {
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail Failed ";
    my $list_str   = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }

    my $fmt_top = "format STDOUT_TOP =\n"
                  . sprintf("%-${max_namelen}s", $failed_str)
                  . $middle_str
                  . $list_str . "\n"
                  . "-" x $Columns
                  . "\n.\n";

    my $fmt = "format STDOUT =\n"
              . "@" . "<" x ($max_namelen - 1)
              . " @>> @>>>> @>>>> @>>> ^##.##% "
              . "^" . "<" x ($list_len - 1) . "\n"
              . '{ $Curtest->{name}, $Curtest->{estat},'
              . ' $Curtest->{wstat}, $Curtest->{max},'
              . ' $Curtest->{failed}, $Curtest->{percent},'
              . ' $Curtest->{canon}'
              . "\n}\n"
              . "~~" . " " x ($Columns - $list_len - 2) . "^"
              . "<" x ($list_len - 1) . "\n"
              . '$Curtest->{canon}'
              . "\n.\n";

    eval $fmt_top;
    die $@ if $@;
    eval $fmt;
    die $@ if $@;

    return($fmt_top, $fmt);
}
1N/A
1N/A{
1N/A my $tried_devel_corestack;
1N/A
1N/A sub _corestatus {
1N/A my($st) = @_;
1N/A
1N/A my $did_core;
1N/A eval { # we may not have a WCOREDUMP
1N/A local $^W = 0; # *.ph files are often *very* noisy
1N/A require 'wait.ph';
1N/A $did_core = WCOREDUMP($st);
1N/A };
1N/A if( $@ ) {
1N/A $did_core = $st & 0200;
1N/A }
1N/A
1N/A eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
1N/A unless $tried_devel_corestack++;
1N/A
1N/A return $did_core;
1N/A }
1N/A}
1N/A
1N/Asub _canonfailed ($$@) {
1N/A my($max,$skipped,@failed) = @_;
1N/A my %seen;
1N/A @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
1N/A my $failed = @failed;
1N/A my @result = ();
1N/A my @canon = ();
1N/A my $min;
1N/A my $last = $min = shift @failed;
1N/A my $canon;
1N/A if (@failed) {
1N/A for (@failed, $failed[-1]) { # don't forget the last one
1N/A if ($_ > $last+1 || $_ == $last) {
1N/A if ($min == $last) {
1N/A push @canon, $last;
1N/A } else {
1N/A push @canon, "$min-$last";
1N/A }
1N/A $min = $_;
1N/A }
1N/A $last = $_;
1N/A }
1N/A local $" = ", ";
1N/A push @result, "FAILED tests @canon\n";
1N/A $canon = join ' ', @canon;
1N/A } else {
1N/A push @result, "FAILED test $last\n";
1N/A $canon = $last;
1N/A }
1N/A
1N/A push @result, "\tFailed $failed/$max tests, ";
1N/A if ($max) {
1N/A push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
1N/A } else {
1N/A push @result, "?% okay";
1N/A }
1N/A my $ender = 's' x ($skipped > 1);
1N/A my $good = $max - $failed - $skipped;
1N/A if ($skipped) {
1N/A my $skipmsg = " (less $skipped skipped test$ender: $good okay, ";
1N/A if ($max) {
1N/A my $goodper = sprintf("%.2f",100*($good/$max));
1N/A $skipmsg .= "$goodper%)";
1N/A } else {
1N/A $skipmsg .= "?%)";
1N/A }
1N/A push @result, $skipmsg;
1N/A }
1N/A push @result, "\n";
1N/A my $txt = join "", @result;
1N/A ($txt, $canon);
1N/A}

=end _private

=back

=cut


1;
__END__


=head1 EXPORT

C<&runtests> is exported by Test::Harness by default.

C<$verbose>, C<$switches> and C<$debug> are exported upon request.

=head1 DIAGNOSTICS

=over 4

=item C<All tests successful.\nFiles=%d, Tests=%d, %s>

If all tests are successful, some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped: %s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.

=back
=head1 ENVIRONMENT

=over 4

=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests. This allows
the tests to determine if they are being executed through the harness
or by any other means.

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal. If it is not
set then it will default to C<COLUMNS>. If this is not set, it will
default to 80. Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.

=item C<HARNESS_COMPILE_TEST>

When true it will make harness attempt to compile the test using
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_DEBUG>

If true, Test::Harness will print debugging information about itself as
it runs the tests. This is different from C<HARNESS_VERBOSE>, which prints
the output from the test being run. Setting C<$Test::Harness::Debug> will
override this, or you can use the C<-d> switch in the F<prove> utility.

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

  LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is taken with respect to the current
directory at the moment runtests() was called. Putting an absolute path
into C<HARNESS_FILELEAK_IN_DIR> may give more predictable results.

=item C<HARNESS_IGNORE_EXITCODE>

Makes harness ignore the exit status of child processes when defined.

=item C<HARNESS_NOTTY>

When set to a true value, forces Test::Harness to behave as though
STDOUT were not a console. You may need to set this if you don't want
harness to output more frequent progress messages using carriage
returns. Some consoles may not handle carriage returns properly (which
results in a somewhat messy output).

=item C<HARNESS_OK_SLOW>

If true, the C<ok> messages are printed out only every second. This
reduces output and may help increase testing speed over slow
connections, or with very large numbers of tests.

=item C<HARNESS_PERL>

Usually your tests will be run by C<$^X>, the currently-executing Perl.
However, you may want to have it run by a different executable, such as
a threading perl, or a different version.

If you're using the F<prove> utility, you can use the C<--perl> switch.

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test. For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.

=item C<HARNESS_VERBOSE>

If true, Test::Harness will output the verbose results of running
its tests. Setting C<$Test::Harness::verbose> will override this,
or you can use the C<-v> switch in the F<prove> utility.

=back
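As a concrete illustration of C<HARNESS_ACTIVE> above, the sketch below
sets the variable by hand to simulate what runtests() does before
executing each test (illustrative only, not part of the module):

```perl
#!/usr/bin/perl -w
# Illustrative only: a test script can branch on HARNESS_ACTIVE to
# tell whether it is being run by the harness. We set the variable by
# hand here, standing in for what runtests() does automatically.
use strict;

$ENV{HARNESS_ACTIVE} = 1;    # pretend we are under the harness
my $where = $ENV{HARNESS_ACTIVE} ? 'under harness' : 'standalone';
print "$where\n";
```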

=head1 EXAMPLE

Here's how Test::Harness tests itself:

  $ cd ~/src/devel/Test-Harness
  $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
  Using /home/schwern/src/devel/Test-Harness/blib
  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  All tests successful.
  Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)

=head1 SEE ALSO

L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.

=head1 AUTHORS

Either Tim Bunce or Andreas Koenig, we don't know. What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages. Numerous anonymous contributors
exist. Andreas Koenig held the torch for many years, and then
Michael G Schwern.

Current maintainer is Andy Lester C<< <andy@petdance.com> >>.

=head1 LICENSE

This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.

See L<http://www.perl.com/perl/misc/Artistic.html>

=head1 TODO

Provide a way of running tests quietly (ie. no printing) for automated
validation of tests. This will probably take the form of a version
of runtests() which rather than printing its output returns raw data
on the state of the tests. (Partially done in Test::Harness::Straps)

Document the format.

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.
(Partially done with new skip test styles)

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

Trap STDERR.

Implement Straps total_results()

Remember exit code

Completely redo the print summary code.

Implement Straps callbacks. (experimentally implemented)

Straps->analyze_file() not taint clean, don't know if it can be

Fix that damned VMS nit.

HARNESS_TODOFAIL to display TODO failures

Add a test for verbose.

Change internal list of test results to a hash.

Fix stats display when there's an overrun.

Fix so perls with spaces in the filename work.

=for _private

Keep whittling away at _run_all_tests()

=for _private

Clean up how the summary is printed. Get rid of those damned formats.

=head1 BUGS

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.

Please use the CPAN bug ticketing system at L<http://rt.cpan.org/>.
You can also mail bugs, fixes and enhancements to
C<< <bug-test-harness@rt.cpan.org> >>.

=head1 AUTHORS

Original code by Michael G Schwern, maintained by Andy Lester.

=head1 COPYRIGHT

Copyright 2003 by Michael G Schwern C<< <schwern@pobox.com> >>,
Andy Lester C<< <andy@petdance.com> >>.

This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.

See L<http://www.perl.com/perl/misc/Artistic.html>.

=cut