Inasmuch as it’s something that people ask for, I demoed in the DemoJam at XML Prague that I’d been working on a Relax NG schema and Schematron rules for validating XSL-FO. Most of both the schema and the Schematron were generated directly from the XML source for the XSL 1.1 Recommendation. Additionally, the Schematron used a parser written in XSLT for handling the XSL-FO expression language, so the Schematron could evaluate property values rather than just matching on property value strings.
There was also an oXygen add-on framework in the works, and, naturally, the schema and Schematron also covered Antenna House extensions.
If you look at the screenshot, you’ll see:
- Schematron error for the interrelated
- No error for ‘column-count="-1 - -2"’ because the value evaluates as a positive integer.
- oXygen ‘tooltip’ information for fo:block extracted from the XML for the XSL 1.1 Recommendation.
- The ‘neutral’ and ‘out-of-line’ formatting objects, as well as the XSL 1.1 ‘point’ fo:change-bar-end formatting objects that can appear anywhere inside a fo:flow, are available where they are allowed.
- Schematron error for the invalid
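To make the ‘column-count="-1 - -2"’ case concrete, here is a minimal sketch of the kind of Schematron rule involved. The ahf:evaluate() function name is hypothetical, standing in for the XSLT-based expression-language parser; the real rules are generated from the Recommendation's XML source, not hand-written like this.

```xml
<!-- Hypothetical sketch: flag a column-count that does not evaluate
     to a positive integer.  "ahf:evaluate" stands in for the XSLT
     expression-language parser; the generated rules differ. -->
<sch:rule context="fo:region-body[@column-count]"
          xmlns:sch="http://purl.oclc.org/dsdl/schematron"
          xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <sch:assert test="ahf:evaluate(@column-count) &gt; 0">
    column-count="<sch:value-of select="@column-count"/>" should
    evaluate to a positive integer.
  </sch:assert>
</sch:rule>
```

Because the property value is evaluated rather than string-matched, "-1 - -2" passes: the expression works out to 1.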
Inasmuch as it exists as a PDF file, you, too, can have your own copy of my “Schematron Testing Framework” (stf) poster from XML Prague 2012. I’m happy to say that I received constructive comments about stf from people at XML Prague 2012 who read the poster, and I’ll be looking at incorporating the feedback in the near future.
One suggestion, from George Bina, was to make a single “framework” file for running the tests – including the test files in the framework file either directly or by using XInclude to refer to external test files – rather than the current decentralised approach. A single framework file would make it easier to produce a report of the results, unlike the current approach, where the idea is that the only report you really want to see is “<errors/>” when there are no more errors. On the other hand, a single framework file could become very large and hard to navigate when there are lots of very similar tests in it. What do you think?
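For illustration, a single framework file along those lines might look like the following. The element names here are invented for the sake of the sketch; the point is just that tests can sit inline or be pulled in with XInclude.

```xml
<!-- Hypothetical framework file: element names are invented for
     illustration.  Tests can be embedded directly or referenced
     as external files via XInclude. -->
<test-suite xmlns:xi="http://www.w3.org/2001/XInclude">
  <!-- A test document included directly... -->
  <test id="column-count-negative">
    <fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
      <!-- ... -->
    </fo:root>
  </test>
  <!-- ...and one kept as an external file. -->
  <xi:include href="tests/column-count-positive.fo"/>
</test-suite>
```

A runner reading this file could then emit one consolidated report for the whole suite, which is exactly the reporting advantage over the decentralised layout.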
Inasmuch as a suite of Schematron tests contains many contexts where a bug in a document will make a Schematron assert fail or a report succeed, it follows that for any new test suite and any reasonably sized but buggy document set, there will straight away be many report messages produced by the tests. When that happens, how can you be sure your Schematron tests all worked as expected? How can you separate the expected results from the unexpected? What’s needed is a way to characterise the Schematron tests before you start as reporting only what they should: no more, and no less.
stf (https://github.com/MenteaXML/stf) is an XProc pipeline that runs a Schematron test suite on test documents (that you create), winnows out the expected results, and reports just the unexpected. stf uses a processing instruction (PI) in each of a set of (typically small) test documents to indicate the test’s expected reports: the expected results are ignored, and all you see is what’s extra or missing. When you have no more unexpected results from your test documents, you’re ready to use the Schematron on your real documents.
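As a sketch of the idea (the PI name and syntax here are invented for illustration, not stf’s actual convention), a small test document might declare its expected reports like this:

```xml
<?xml version="1.0"?>
<!-- Hypothetical test document.  The PI lists the Schematron
     message this document is expected to trigger; stf's real
     PI syntax may differ. -->
<?expected-reports column-count should evaluate to a positive integer?>
<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <fo:layout-master-set>
    <fo:simple-page-master master-name="page">
      <fo:region-body column-count="0"/>
    </fo:simple-page-master>
  </fo:layout-master-set>
  <!-- ... -->
</fo:root>
```

When the pipeline runs the Schematron over this document, a report matching the PI is winnowed out as expected; an extra report, or the absence of the declared one, surfaces as a failure.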