MINUTES OF BCS FORTRAN SPECIALIST GROUP MEETING
HELD AT BCS HQ, 13 MANSFIELD STREET, LONDON
ON 17 MARCH 1986
Present: Stephen Brazier - Prime Computer
Chris Cartledge - University of Salford
F E Cox - Swifte Computer Systems
Dennis Davis - Computer Unit, Bath University
J L Dyke - HRC
Bill Flexner - Retired
Mike Geary - NAG
Bob Hatfield - University of Nottingham
Carole Hewlett - LSE
J Peter Holland - SS(E)L
Chris Lazou - ULCC
Heather Liddell - QMC (Computer Science)
C T Little - UK Met Office
Alan Macdonald - SSF
Clive Massey - SWURCC
Keith Normington - Coventry Polytechnic
Mike Nunn - CCTA
D Parkinson - QMC
Mick Pont - NAG
T L Van Raalte - MOD (PE)
Nick Saville - Private Consultant
Lawrie Schonfelder - Liverpool University
John Steel - QMC
John Stratford - QMC CC
Steve Watkins - UMIST
Alan Wilson - ICL
Helen Wright - York University
John Young - MOD (PE)
Chairman: John Wilson
University of Leicester
LEICESTER LE1 7RH
Tel: 0533 554455 ext 9137
Vice-Chairman: Keith Normington
Coventry (Lanchester) Polytechnic
COVENTRY CV1 5FB
Secretary: Mike Nunn
LONDON SW1P 4RT
Treasurer: John Dyke
Huntingdon Research Centre
HUNTINGDON PE18 6ES
1. Apologies for Absence
Apologies were received from John Wilson who was on holiday abroad.
In his absence the Vice-Chairman, Keith Normington, took the chair.
2. Minutes of Previous Meeting [9 December 1985]
Correction: The post code of the Treasurer, John Dyke, should be
PE18 6ES.
3. Matters Arising
3.1 The Group had responded to the letter received from
G Aharonian in the USA, recommending "killing off FORTRAN" -
see appendix to February newsletter.
3.2 It was hoped that it might prove possible for the Group
to visit the European Medium Range Weather Centre for the
4. BCS Business
4.1 Minutes had been received from the recent meeting of the
Netherlands FORTRAN Group. These were circulated amongst those present.
4.2 BCS had held a one-day conference on 'Standards' in London
on 28 February. It was aimed at those representing BCS on
standards making bodies.
4.3 There was some discussion on whether the Group should
purchase a micro for administration purposes, but it was decided
not to do so whilst the present committee members already had
access to the facilities they needed on an ex-gratia basis at
their employing organisations.
4.4 A new BCS Specialist Group on Parallel Processing had
recently been set up - details are available from BCS HQ.
4.5 It was decided that our Group should develop a membership
application form based on ones already in use by other Specialist
Groups. This would help our administration and the collection of
annual dues from non-BCS members.
5. Annual General Meeting
5.1 There were no alternative nominations so the incumbent committee
members were re-elected unanimously. The committee comprises:
Chairman: John Wilson (Leicester University)
Vice-Chairman: Keith Normington (Coventry Polytechnic)
Secretary: Mike Nunn (CCTA)
Treasurer: John Dyke (Huntingdon Research Centre)
The Treasurer talked through the Group's statement of accounts for
1985/86 - see appendix A. The main expenditure had been £251.89
comprising the cost of the newsletter and holding quarterly meetings
at BCS HQ. The Group had made a profit from "Fortran Forum" and had
opened a deposit account to take advantage of this. It would be used
to help set up a further "Forum" when Fortran 8X is released for public
comment.
5.2 BCS had agreed a £500 budget for the Fortran Group for the
coming year. It was decided that at our September meeting we should
consider framing a request for BCS to increase this.
5.3 Dates of future meetings:
Following a recent census conducted by the Chairman, it was found
that Thursday was the most favoured day for Group meetings. Therefore,
as an experiment, all Fortran meetings during the coming 12 months
will take place on this day (with the exception of the coming meeting
on 25th July).
25 July, 1986 - BCS HQ
16 October, 1986 - Visit to European Medium Range Weather Centre
11 December, 1986 - Coventry Polytechnic
19 March, 1987 - BCS HQ (including AGM)
(These dates have had to be adjusted since the March meeting.)
5.4 Fortran Forum:
It is intended to hold another "Fortran Forum" soon after '8X' is
released for public comment. Likely date would be February or
March 1987 and suggested venues are either Geological Society,
Burlington House or Royal Society, Carlton House Terrace. [Alternative
suggestions should be communicated to the Secretary].
6. X3J3 Progress
Lawrie Schonfelder (Liverpool University), who is a member of the ANSI
X3J3 committee, gave the following account of recent progress:
6.1 The January X3J3 meeting in Sunnyvale had been mostly devoted
to editorial activity on the '8X' draft document. The parent X3
committee holds the responsibility for deciding whether the draft is
in a fit state to release for public comment.
6.2 There was a lot of discussion on the following features:
- REAL (*,*)
- bit masking in WHERE statement
- deprecated features [the intention is that these will be removed
from Fortran 9X unless there are special
reasons not to do so; but this is not
binding on future X3J3 committees].
6.3 There was endless debate on the size of the new Fortran
language - latterly IBM have been particularly opposed to its size and
want to throw away a large part of '8X'. In fact, there now seems
to be something of a divide between vendors and users on the X3J3
committee. One problem is that some companies have already extended
their Fortran implementations in directions they thought '8X' was
going, but now find some of their extensions will neither be in '8X'
nor compatible with it. At present it does not seem that a vote
within X3J3 would achieve the necessary two thirds "yes" in favour
of releasing the draft for public comment. (Anyone voting "no" must,
however, give alternative technical proposals for features opposed).
6.4 The latest version of the draft is the S8 "blue book" document.
6.5 N.b. Since the March BCS Fortran Specialist Group meeting
a further X3J3 meeting has taken place at Scranton in April.
John Reid (Harwell) has kindly written a report on the proceedings
- see appendix B.
7. Floating Point Accuracy and Numerical Precision in Fortran
7.1 In the afternoon there were three individual talks:-
"FPV - a floating point validation package" - Mick Pont (NAG)
"A user's experience with the NAG floating
point verifier" - Chris Cartledge
"Fortran 8X - portable control of floating
point precision" - Lawrie Schonfelder
7.2 FPV - a floating point validation package - Mick Pont (NAG)
One of the reasons that has initiated work on floating point test
packages has been discovery of situations in some supplier
implementations of either total inaccuracy or considerably less
accuracy than expected, eg, occasional double precision operations
which are in fact only accurate to single precision.
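The sort of deficiency described, a nominally double precision operation
that is only single-precision accurate, can be exposed by probing the
working precision directly. As a minimal illustration (in Python rather
than the Fortran of the packages discussed, and not part of FPV itself):

```python
# Probe the unit roundoff of the host arithmetic: keep halving eps
# until adding half of it to 1.0 no longer changes the result.
def machine_epsilon():
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    return eps

# A 53-bit (IEEE double) significand gives 2**-52; an arithmetic that
# silently worked to 24-bit single precision would give 2**-23.
print(machine_epsilon())  # 2.220446049250313e-16 on IEEE doubles
```

Any implementation whose probe stops far short of its advertised
precision shows exactly the symptom that motivated FPV.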
Mick believes those having a requirement for a package of floating
point tests include:-
- manufacturers of floating point hardware
- developers of floating point software/microcode
- managers of scientific computing installations
- implementors of scientific software, eg, NAG.
Amongst previous work in this field, which has heavily influenced
the development of FPV, has been by N L Schryer of AT & T
Bell Laboratories with FPTST program (12000 lines of Fortran first
published in 1981). Brian Wichmann (NPL) looked at the Schryer
routine, liked it and decided to develop something smaller based
on it which would be suitable for testing a wider range of
machines. So it came to be that FPV was developed in collaboration
between NAG and NPL with joint funding.
A detailed summary of the features of FPV appears in appendix C.
For further information on this product and its availability,
please contact NAG. Their address is:-
256 Banbury Road
OXFORD OX2 7DE
7.3 A user's experience with the NAG floating point verifier -
Chris Cartledge (Salford University)
Chris Cartledge works in the Computing Services Section at
Salford University. When NAG advertised the availability of FPV
last year, Chris approached them and acquired a copy of the test
suite. The reason Salford wanted it was that at the time they had
an ICL 1904S which offered 48 bit single precision, but were going
to buy a replacement system and wondered about the transition
effects if the new system happened to be 32 bit. Also, Salford
had an interest in writing compilers for Prime systems, were having
problems with getting accurate results on intrinsic functions and
the reasons were not immediately obvious.
Chris ran FPV on Prime Systems and the IBM PC. He found some
discrepancies in results produced by different Prime machines
- a full report on his findings appears in appendix D. On the
IBM PC the arithmetic seemed pretty near perfect with the
exception of an error found in double precision square root.
Chris obtained FPV from NAG on a magnetic tape. It is written
in Fortran and he found it well documented and easy to compile
and run. In summary about FPV, he would say:-
- It does find bugs in floating point;
- interpretation of results needs care;
- comparing results with sums done by hand is difficult;
- FPV still contains a few bugs (but fewer than the machines it tests);
- as supplied, it does not test overflow and underflow
(but can be amended to do so).
Altogether, whilst FPV is not perfect, it is well documented,
usable and supported.
7.4 Fortran 8X - Portable control of floating point precision -
Lawrie Schonfelder (Liverpool University)
Brian Smith, one of the X3J3 committee members, decided that there
were certain problems with floating point in Fortran 77, eg,
precision of approximations and processor dependence of number
ranges. In 8X however, a processor must select whatever precision
the user requires. Should the processor not be able to meet the
user requirement, it must report the fact - it is not sufficient
just to generate the best the processor can achieve at compile time.
A copy of Lawrie's slides describing how 8X handles floating
point precision appears in appendix E.
8. Date of Next Meeting
The next meeting of the BCS Fortran Specialist Group will take place on
Friday, 25 July 1986 from 10.30 am to 4.30 pm at BCS HQ. In the afternoon
starting at 2.00 pm, there will be a talk by George Mozdzynski (CDL) on
the new ETA super computer. (Please note that the meeting taking place
at European Medium Range Weather Centre has had to be postponed until
16 October 1986.)
9 June 1986
BCS Fortran Specialist Group - Income and Expenditure - as at 7.3.86.
BCS grant to offset charges at HQ 231.89
Subscriptions from non-BCS members 5.00
Balance requested from HQ to cover external expenditure 90.18
Total income 327.07
Mailing of agenda and minutes:
Computer costs - labels
Stationery costs - envelopes
ANSI X3J3 Observers fee
BCS Fortran Specialist Group - Balance - as at 7.3.86
Opening balance - 1.5.85 :
Account A (current account) 1284.40
Account B (deposit account) 0.00
Closing balance - 7.3.86 :
Account A (current account) 294.22
Account B (deposit account) 900.00
Balance requested from HQ to cover external expenditure 90.18
To: BCS, NAG, etc.
From: John Reid
Date: 29 April 1986
Subject: Report on X3J3 meeting at Scranton, 7-11 April 1986
Note: This is a personal report of the meeting and in no sense
does it constitute an official record of it.
The letter ballot of X3J3 members, which asked whether the draft
standard (S8) is ready to be released for public comment, closed just
before the meeting. The result was: Yes (many with comments)-16,
No-20, and one member did not vote. The rules require anyone who
votes 'no' to say why and for the committee to seek changes that
resolve the objections. It is also required to consider the comments
accompanying 'yes' votes. The 'no' votes were based on the deprecated
features, the size of the language, and the quality of the document.
A 'language integration' subgroup was formed to seek a compromise that
would resolve all the 'no' votes, and more than half the committee
elected to serve on it. Despite the large number of 'no' votes, it
was not an acrimonious discussion, but rather a genuine attempt to
resolve differences of opinion and a friendly spirit pervaded. For
the deprecated features, a solution was obtained (see Section 2) that
satisfied all but the IBM member. On the size of the language, it was
decided to move a substantial subset to an appendix (see Section 3).
Since about as many members felt that too much had been proposed for
removal as thought that too little had been proposed, a compromise
size has obviously been found. It was agreed that the approach should
be worked on. Eventually at least half of the members must vote 'yes'
and at least two-thirds of those voting must vote 'yes'. Whether
enough members who are dissatisfied with the compromise will vote
'yes' eventually remains to be seen. There was no disagreement that
much editorial work is still needed on the document and those members
not on the language integration group worked steadily through most of
the hundreds of editorial changes suggested.
2. Deprecated features
A significant number of 'no' ballots, including mine, cited the
deprecated features. After discussion during the week, the issue was
settled to everyone's satisfaction except Dick Weaver (IBM). The
deprecated features will be divided into two sets. A few features
(tentatively: arithmetic IF, noninteger DO indices, DO loop
termination other than on a CONTINUE or END DO statement, nested DO
loops with a common terminal statement, and branching to an END IF
statement from outside its IF block), all of which have replacements
in Fortran 77, are to be called 'obsolescent'. It will be recommended
that all Fortran 8x features not labelled 'obsolescent' be retained in
Fortran 9x, and that an obsolescent feature be removed from Fortran 9x
only if its use has become rare. The rest of the deprecated features
will still be called 'deprecated', users will be recommended to avoid
using them, and they may be placed in the Fortran 9x obsolescent list,
for possible removal in the standard that follows Fortran 9x. Dick
Weaver (IBM) was opposed to having any obsolescent features at all, in
order to protect the investment in codes. My view is that it is
essential to have some means for the language to evolve, though the
pace cannot be rapid. Kevin Harris (DEC) will be making a formal
proposal to the next meeting.
3. Size of the language
Many of the 'no' ballots were based on the language being
regarded as too big, and it was agreed that the language integration
subgroup should work on a proposal from Dick Hendrickson (Cray) that
involves moving parts of the language to an appendix. The compromise
'straw man' that was eventually formulated was as follows:
1. Add a new appendix: Extension Features
2. Separate the two forms of DO
3. Remove significant blanks and insignificant underscores
4. Move to appendix:
structure array of arrays treated as a higher order array
array constructors and structure constructors
some intrinsic functions
nesting of internal procedures
passing internal procedures as actual arguments
5. Simplify internal procedures:
single level, inherits host's environment
no USE on host
may not contain ENTRY statements
6. Simplify Module/Use:
modules: define a shared data environment
use: applied only to modules; only options are ONLY
7. Fix Real(*,*):
argument list may contain one Real(*,*) argument
effective precision intrinsics may be used to
8. Add pointers to the appendix
9. Replace name-directed I/O with "standard" NAMELIST
10. Add storage mapping
My personal view on this is that it is unacceptable on two counts:
(a) Passing internal procedures as actual arguments is easy to
implement in the nonrecursive case and is very important for
numerical libraries. If a user wishes to calculate
∫ f(x)dx, the library code may require a function F(X). If F(X)
is written as an external function, it is awkward to access the
other data on which the function depends (impossible in Fortran
77 when it includes an array of variable size). If F(X) is
written as an internal function, the whole data environment of
the point of call of the library procedure is available.
(b) Removing user-defined operators and operator overloading
means that we no longer have derived types, but have only data
structures. It will mean, for instance, that we have an array
facility that does not include matrix multiplication. The
language will no longer be extendable. It will not be possible
to write modules for matrices, multiple precision, rational
arithmetic, interval arithmetic, variable-length strings, set
operations, vector algebra, etc.
However, I do approve of some of the changes:
(c) arrays of arrays really are different from arrays.
(d) the IDENTIFY statement may seriously inhibit optimization
and allows many-one arrays (arrays with more than one
element using the same storage location).
(e) vector-valued subscripts also allow many-one arrays.
(f) simplification of internal procedures and the USE statement.
(g) fixing real (*,*).
Mike Metcalf (CERN) suggested that the small font currently used for
deprecated features should instead be used for the appendix material
and the obsolescent features (of which there are few) be marked in
some other way. I would find this totally unacceptable for the final
standard, but it would enormously ease the task of getting a draft out
for public comment soon.
4. Fortran binding to the PHIGS graphics standard
The Fortran binding to the PHIGS graphics standard was approved.
5. Real(*,*) combinatorial explosion
Lawrie Schonfelder presented a proposal that limits the
real(*,*) combinatorial explosion (very large numbers of versions of a
procedure being needed for all combinations of precisions of its
real(*,*) arguments) but does not prevent it. There being clear
sentiment that total prevention is wanted, he did not move his
proposal. A straw vote (16-6-8) favoured having the ability to link
one precision to another and the language integration group favoured
removing the combinatorial explosion by limiting the number of
real(*,*) arguments a procedure may have to one. I promised to bring
a proposal to the next meeting.
6. Random-number generator
Carl Burch proposed adding an intrinsic function for generating
random numbers, following a request from the ISO/WG5 meeting in Bonn
last year. A straw vote favoured having such a facility, but several
members felt that it should be an intrinsic subroutine.
FPV - A FLOATING-POINT VALIDATION PACKAGE (Status report: September 1985)
Summary of Features
FPV is a software package for validating an implementation of
floating-point arithmetic. It is primarily intended to check for design
errors in floating-point arithmetic, but may also be used to check for
intermittent errors (caused by a transient malfunction in the hardware).
By 'validation' we mean simply an experimental verification that
floating-point arithmetic has been correctly implemented according to its
specification. FPV must be supplied with the essential parameters of the
specification - base, precision, exponent range, and rounding rule - and
attempts to verify that the arithmetic conforms to these parameters by
probing for errors as best we know how. Different implementations of
arithmetic may pass the tests according to more or less stringent criteria:
the 'best' implementations (such as those that conform to the IEEE
standard) will satisfy the most stringent criteria.
In order to facilitate testing in as wide a variety of environments as
possible, FPV allows the testing procedure to be split into two phases. In
'two-phase' mode, one program FPVGEN generates a file of test data; a
second program FPVTGT, usually running on a different machine, reads the
file and performs the tests.
    Machine A                       Machine B
  +-----------+                   +-----------+
  |  FPVGEN   | -->-  Data  -->-  |  FPVTGT   |
  |           |       file       |           |
  +-----------+                   +-----------+
The program FPVTGT is comparatively short and simple, and is not difficult
to adapt to different environments (even if this involves translating it
into a different language). FPV can also operate in 'all-in-one' mode, in
which the program FPVGEN does not write a file, but immediately performs
the tests itself, all on one machine.
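The two-phase split can be sketched generically. The following Python
sketch is purely illustrative: the function names, the file format, and
the restriction to addition are all invented here, not FPV's actual
interface:

```python
# Purely illustrative sketch of FPV's two-phase mode.
import io

def generate(pairs, out):
    # Phase 1 (the FPVGEN role): write each operand pair together with
    # the result computed on the generating machine.
    for x, y in pairs:
        out.write(f"{x!r} {y!r} {(x + y)!r}\n")

def check(inp):
    # Phase 2 (the FPVTGT role): redo each addition, possibly on a
    # different machine, and count disagreements.
    failures = 0
    for line in inp:
        x, y, expected = (float(tok) for tok in line.split())
        if x + y != expected:
            failures += 1
    return failures

buf = io.StringIO()
generate([(0.1, 0.2), (1.0, 2.0 ** -53)], buf)
buf.seek(0)
print(check(buf))  # 0 - generator and checker ran on the same arithmetic
```

In real use the data file would be written on one machine and read on
another; any nonzero count flags operand pairs on which the two
arithmetics disagree.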
The programs FPVGEN and FPVTGT are currently written in both standard
Fortran 77 and ISO standard Pascal, level 1. They therefore require a
suitable compiler to be available and they test the arithmetic as 'seen'
through those languages. A few machine-specific modifications may be needed
to make the programs completely robust. In order to test arithmetic on
machines which do not have a Fortran or Pascal compiler, or to test
arithmetic as seen through a different language (e.g. Basic, Ada), it is
necessary to translate all or part of the program FPVTGT into a suitable
language.
FPV allows arbitrary values for the base, precision and range of
floating-point numbers (though currently the base must not exceed 16). FPV
can test the following floating-point operators:
addition and subtraction (x ± y)
absolute value (|x|)
square root (√x)
comparisons (x=y, x≠y, x<y, x>y, x≤y, x≥y)
FPV can either test that the results are correct according to one of a
choice of commonly used rounding rules; or, if the rounding rule is unknown
or not one of those provided, it can test that the results lie within the
narrow bounds defined in the model of floating-point arithmetic developed
by W.S. Brown (the 'Brown model'). FPV can also be used to test whether the
overflow and underflow flags are set correctly, though for this purpose
machine-specific modifications must be made to the programs. FPV does not
test mixed-precision operations, nor does it test conversion between
floating-point numbers and integers or decimal strings.
Because the number of combinations of floating-point operands on any
realistic computer is enormous (e.g. of the order of 10^18 or more), any
testing must be extremely selective. The selection strategy used by FPV has
been demonstrated to be effective in practice, but there is no guarantee
that errors might not exist which cannot be detected by FPV. Moreover the
stringency of the testing performed by FPV is under the control of the
user. A user can select a subset of the tests of which FPV is capable;
indeed he will normally wish to do so to ensure that the tests can be
completed in a reasonable length of time, or sometimes to focus attention
on a particular feature that is suspect. Effective use of FPV is the user's
responsibility and requires a reasonable degree of understanding. The FPV
User's Guide aims to explain the necessary background and to give suitable
advice.
FPV has already found errors in hardware or software implementations of
floating-point arithmetic on the following machines: DEC VAX-11, Cray-1,
ICL 2988, Apollo 300, CDC Cyber 205.
Authorship and Acknowledgements
FPV was developed by NAG Ltd, in collaboration with Dr. B.A.Wichmann of
NPL, under a contract jointly funded by NAG Ltd and the Department of Trade
and Industry. FPV is based on ideas used by N.L.Schryer of A.T.&T. Bell
Laboratories in a program FPTST. We are grateful to Dr. Schryer for his
advice and encouragement.
Development of FPV is essentially complete, and the package will be made
available when licensing agreements with other parties have been finalised.
An announcement concerning availability will be made in the last quarter of
1985; details will be sent to all NAG Newsletter recipients and in response
to all enquiries received about FPV. In the meantime, if you wish to
receive more information about FPV, please contact NAG.
An investigation of Primes floating-point arithmetic
C J Cartledge, Computing Services, University of Salford
Floating-point arithmetic is the hardware (occasionally software) on a
computer system which performs arithmetic operations on quantities
declared as REAL or DOUBLE PRECISION in a Fortran program (or real in
a Pascal program).
At the time of Salford's recent computer replacement exercise, some
users were concerned about the floating-point precision of the
short-listed computers. This was largely caused by the move from the
ICL 2960 which used 48 bits to represent single precision
floating-point quantities, and 96 bits for double precision ones. By
contrast, the machines on the short-list (with one exception) had 32
bit single and 64 bit double precision representations.
With this in mind, CSS took an early version of FPV (Floating-Point
Verifier) from NAG Ltd in order to evaluate the floating-point system
of the Prime. This software models floating-point operations using
integers and tests the results obtained from the model against those
actually obtained on the machine. It tests the major floating-point
operations, * + - / and square root, absolute value, and comparison.
A vast number of pairs of floating-point operands exist, so that
testing all of them is not possible. However, certain operand types
are known to cause problems, and a selection of these were tested.
FPV has been developed at NAG by J J Du Croz and M W Pont [1],
following the pioneering work of Schryer [2] in testing floating-point
operations and the development of a realistic model for floating-point
arithmetic by Brown [3].
On the Prime machines around 250 000 operand pairs were tested across
all the operators, in both single and double precision on Prime 9955
and Prime 750 machines. In addition more limited tests were performed
on Prime 9950 (which appears to behave the same way as the 9955), and
on the Prime 2250. There are other Prime machines on Campus, in
particular 2550 and 550 machines, but these have yet to be tested.
2 Single Precision
In single precision, tests were done to check that the machines gave
less than one bit error on their 23 bit accuracy. All the machines
performed correctly, so they correctly calculate to slightly less than
7 (about 6.9) digit precision with a range from approximately ±1.0E-38
to ±1.0E+38.
For operand pairs where there is no exact answer, different machines
in the range may give different answers. The answers on different
machines to a single operation will never differ by more than one
least significant bit.
Prime machines hold a single precision quantity in registers with more
precision than it is held in memory. This can cause the following
program to print Hello.
This is quite a common feature of floating-point processors. Indeed
it can even happen on a processor following the recent IEEE Floating
Point Standard [4]. The effect is not, as is often thought, that the
processor can get different answers on different occasions for the
same floating-point operation. A processor that does get different
answers for the same sum on different occasions has a fault! Instead
it is due to the fact that the answer in a register may be held with more
precision than a number in store. It is actually the process of
storing the answer that causes a loss of precision!
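The store-rounds effect can be mimicked in Python (a hypothetical
analogue, not the Prime behaviour itself), using the struct module to
play the role of a 32-bit memory cell:

```python
# Hypothetical Python analogue of the register-versus-store effect:
# struct packs a double into IEEE single precision (32 bits), playing
# the role of storing a single-precision register to memory.
import struct

def store_single(x):
    # Round x to the nearest 32-bit IEEE single-precision value.
    return struct.unpack("f", struct.pack("f", x))[0]

x = 1.0 + 2.0 ** -30            # exact in the "register" (a double)
print(x != 1.0)                 # True  - the register kept the low bits
print(store_single(x) == 1.0)   # True  - storing rounded them away
```

The value held before the store and the value read back afterwards
differ, exactly as described above: it is the act of storing that
discards precision, not any nondeterminism in the arithmetic.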
3 Double precision
The double precision results were more interesting. The
floating-point format of the Prime machines represents double
precision numbers to an accuracy of 47 bits. Prime documentation
states that operations on 750 and 9955 processors are of full 47 bit
accuracy. However, on both processors, subtraction gave answers a
full bit in error and the processors are not even compatible, the 750
giving different answers from the 9955. The following program
      DOUBLE PRECISION X,Y
*     Set X = 140737488355328D0 (= 2**47) by repeated doubling,
*     then start four below it so the loop crosses the limit
      X = 1D0
      DO 5 I = 1,47
    5 X = 2D0*X
      X = X - 4D0
      PRINT*,'      X            X+1        (X+1)-1'
      DO 10 I = 1,4
      Y = X + 1D0
      PRINT*, X, Y, Y - 1D0
   10 X = X + 1D0
      END
Running the program with Prime's F77 compiler on a 9955 produces the
following output (in its current release, FTN77's decimal output is
not sufficiently accurate to show the effect).
X X+1 (X+1)-1
140737488355324.00 140737488355325.00 140737488355324.00
140737488355325.00 140737488355326.00 140737488355325.00
140737488355326.00 140737488355327.00 140737488355326.00
140737488355327.00 140737488355328.00 140737488355326.00
The third column of the output should always be the same as the
first. On the bottom line this can be seen not to be the case.
The 750 gives the following slightly different results, that are again
wrong. The bottom right number is different.
X X+1 (X+1)-1
140737488355324.00 140737488355325.00 140737488355324.00
140737488355325.00 140737488355326.00 140737488355325.00
140737488355326.00 140737488355327.00 140737488355326.00
140737488355327.00 140737488355328.00 140737488355328.00
The problem has been reported to Prime. For the time being the Prime
9955 and Prime 750 should be regarded as being accurate only to 46
bits: slightly less than 14 (about 13.8) digits. The exponent range
of double precision numbers on the Prime is enormous and it is
recommended that users stay within the generous range recommended in
Prime's F77 manual: ±1.0D-9902 to ±1.0D+9825.
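The same experiment can be repeated on any IEEE 754 machine, where the
double-precision significand runs out at 2**53 rather than the Prime's
2**47. A small Python analogue of the test program above:

```python
# IEEE 754 analogue of the Prime experiment: the 53-bit double
# significand runs out at 2**53 = 9007199254740992, just as the
# Prime's 47-bit one does at 2**47 = 140737488355328.
x = 2.0 ** 53 - 1.0              # last range where every integer is exact
print((x + 1.0) - 1.0 == x)      # True  - still exact here
x = 2.0 ** 53
print(x + 1.0 == x)              # True  - 2**53 + 1 rounds back to 2**53
```

A correctly rounding arithmetic behaves this way by design at the edge
of its significand; the Prime results above were faulty because they
lost the last bit well inside the advertised 47-bit range.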
One actual error has been found in double precision, but only on the
Prime 9955 machines. Under certain circumstances, in double
precision, the machine gets answers that are wrong in the least
significant bit. The following rather artificial program causes
integer overflow to demonstrate the problem.
DOUBLE PRECISION R2,RM2,RMZ2
* Non-standard declaration to force 32 bit integers
It prints Hello even though both 2 and -2 have exact floating-point
representations so the minus operation should give an exact answer.
Clearly, since the largest permitted INTEGER*4 is about 2.0E9, the
statement J=I*I*I will cause integer overflow (a condition which is
not normally detected on the Prime, although FTN77's -CHECK option
does detect it). Removing this overflow also removes the inaccuracy.
The effect occurs under other unknown circumstances and can be cleared
by inserting code between the integer overflow and the statement which
follows it. What exactly causes the effect to be cleared is not
known. Prime have been informed of the problem and a microcode (ie,
hardware) fix is expected with the next release for the 9955 machines.
4 Conclusions
There are some oddities with floating-point arithmetic on the Prime
and the documentation is both incomplete and inaccurate.
A hardware (ie, microcode) fix to correct the disturbing error is
expected.
It may be of little comfort to concerned users to assure them that
machines from other manufacturers suffer similar problems, and in some
cases even worse ones. It is also true to say that some other
manufacturers are better.
It would appear that Prime are claiming full compatibility of
floating-point operations across their latest machines. It is to be
hoped that the machines live up to the promise. We intend to keep the
situation under review and report future developments.
[1] FPV Floating-Point Validation Package, Version 1, User's Guide,
JJ Du Croz and MW Pont, Numerical Algorithms Group Ltd, June 1985.
[2] A Test of a Computer's Floating-Point Arithmetic Unit, NL Schryer,
Bell Laboratories Computing Science Technical Report No. 89, February
1981.
[3] A Simple but Realistic Model of Floating-Point Computation, WS
Brown, ACM Transactions on Mathematical Software, Vol. 7, No. 4,
December 1981, pp. 455-480.
[4] A Proposed Standard for Binary Floating-Point Arithmetic (Draft
8.0 of IEEE Task P754), IEEE Computer, 1981.
The REAL Data Type
Lawrie Schonfelder, University of Liverpool
[The slides have been transcribed from the original manuscript. To improve
legibility, subscripts are written as _aaa, e.g. f1 appears as f_1]
Problems with floating point data in F77
i) Two distinct data types defined (one only for COMPLEX)
ii) Precision of approximation processor dependent
iii) Range of representable values processor dependent
iv) No way of finding the properties of the approximation
(no standard environmental enquiries)
v) No portable way of expressing algorithmic precision or range
requirements
Fortran 8x REAL Environmental Enquiries
Based on a "Brown" model of floating point representation
x = s × b^e × SUM_{k=1..p} f_k × b^-k
where b ≥ 2, p ≥ 2, e_min ≤ e ≤ e_max, b > f_1 > 0, b > f_k ≥ 0 (k = 2, 3, ..., p)
TINY = b^(e_min - 1)
HUGE = (1 - b^-p) × b^e_max
EFFECTIVE_PREC = p, if b = 10
               = INT(log_10(b^(p-1))), if b ≠ 10
EFFECTIVE_EXP_RANGE = INT(MIN(log_10(HUGE), -log_10(TINY)))
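Reading the EFFECTIVE_PREC enquiry as INT(log_10(b^(p-1))) for b ≠ 10
(the usual decimal-digits formula), it can be checked against a modern
processor's own environmental enquiry. A Python sketch for IEEE 754
double precision:

```python
# Check the slide's effective-precision enquiry for IEEE 754 double
# precision using Python's own environmental enquiry, sys.float_info
# (radix = b, mant_dig = p in the Brown-model notation).
import math
import sys

b = sys.float_info.radix       # 2
p = sys.float_info.mant_dig    # 53
effective_prec = int(math.log10(float(b) ** (p - 1)))
print(effective_prec)          # 15 decimal digits
print(sys.float_info.dig)      # 15 - the language's own enquiry agrees
```

This is precisely the kind of environmental enquiry the slides complain
is missing from Fortran 77 and which 8X proposes to standardise.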
Declaration of REAL objects - Selection of approximation method
Concept: Single REAL type (approximation of the mathematical "real")
may select from one or more approximation methods to implement any object.
An approximation method characterised by
a decimal precision (EFFECTIVE_PREC)
a decimal range (EFFECTIVE_EXP_RANGE)
REAL objects declared qualified by a minimum precision and range
e.g.: a) REAL (PRECISION10 = 6, EXP_RANGE = 30) :: A,B,C
b) REAL (PRECISION10 = 10, EXP_RANGE = 50) :: X,Y,Z
c) REAL (25, 100) :: EXTREME_VALUES, VERY_ACCURATE_VALUES
a) selects "single" or "short" precision on all processors
b) selects double precision on IBM, VAX, etc (IEEE)
single precision on CDC, CRAY, etc
c) selects double precision on CDC, CRAY
H precision on VAX
fails on IBM, etc.
Allow user defined exponent character
e.g. REAL_CHAR (10,50) T
N.B. E (or none) indicates default real
D indicates DOUBLE PRECISION
Expressions are evaluated by automatic "widening"
For <a> <operator> <b>
the dyadic operation is performed in the greater of the two actual precisions; the
operand of lower precision is widened to that of the greater
"Actual argument attributes must match identically those declared for the associated
dummy arguments"
SUBROUTINE SUB (A, B, C)
REAL (10,50) :: A,B,C
. . .
Only actual arguments also declared to be REAL (10,50) will match.
"A dummy argument can be declared to assume its precision and exponent-range attribute
parameter values from those for the corresponding actual argument"
SUBROUTINE SOLVE (A, B, C)
REAL (*,*) :: A,B,C
. . .
Actual arguments of any precision/range attribute values will match and "pass on"
these values to A, B, C.
N.B. This is current position but has a major technical flaw.
Implied overload: on IBM, 3^3 = 27 versions.
1. Forbid REAL(*,*) declarations.
2. Assumed precision and range arguments must have the route of the assumption made
explicit.
3. The syntax proposed is
SUBROUTINE SOLVE (A, B, C)
REAL (EFFECTIVE_PRECISION(A), &
EFFECTIVE_EXPONENT_RANGE(A)) :: A, B, C, D
PRECISION and RANGE are assumed from the actual argument "A".
Dummy arguments A, B and C all declared to have the same precision and range.
Actual arguments "A", "B" and "C" must all have the same declared precision and range.
N.B. implied overload => 3 versions only on IBM.
Local variable D declared to have the same precision and range as A, B, and C.
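The version counts quoted on these slides (27 with three free REAL(*,*)
arguments, 3 when the assumptions are all tied to one argument) follow
from a simple product rule: with k supported precisions and n
independently assumed arguments, a processor may need k**n specific
versions. A small illustrative sketch (the function name is invented):

```python
def specific_versions(k_precisions, n_free_args):
    # Each independently-assumed argument may take any of the k
    # supported precisions, so the counts multiply.
    return k_precisions ** n_free_args

# IBM's three real precisions:
print(specific_versions(3, 3))  # 27 - three independent REAL(*,*) arguments
print(specific_versions(3, 1))  # 3  - precision assumed from one argument only
```

This is why limiting a procedure to one source of assumed precision
tames the combinatorial explosion without removing the facility.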
Syntax extends to other type-parameters
e.g. Character length
FUNCTION CFUN (A, B)
CHARACTER (LEN=LEN(A)) :: A, B, CFUN
Character lengths for the arguments are assumed from that of the actual argument, A, which
also determines the length of the result. Also A and B must be of conformant length.
e.g. Array shape
FUNCTION AFUN (A, B, C)
INTEGER, ARRAY(SIZE(A)) :: A, B, C, AFUN
argument arrays A, B and C must all have rank one and have the same shape as A, as will
the function result. This provides an easy way of expressing shape conformance among
arguments.
e.g. Parameterised Derived-Types
TYPE VECTOR (PREC, RAN, DIMEN)
REAL(PREC, RAN), ARRAY(DIMEN) :: X
implicitly creates three inquiry functions
TYPE(VECTOR(10,50,3)) :: disp, pos, place
TYPE(VECTOR(5,30,36)) :: state(10)
PREC(disp) ≣ EFFECTIVE_PRECISION (disp%X)
DIMEN(state) ≣ 36
FUNCTION VADD (V1, V2), OPERATOR(+)
TYPE(VECTOR(PREC(V1), RAN(V1), DIMEN(V1))) :: V1, V2, VADD
VADD%X = V1%X + V2%X
END FUNCTION VADD
the following statements are valid
place = VADD(disp, pos) ≣ place = disp + pos
place = VADD(place, VADD(disp, pos)) ≣ place = disp + pos + place
but the following would not be valid, violating conformance rules:
state(1) = state(2) + place