Recently in .NET Category
John Lam announced the very first pre-alpha drop of IronRuby - Microsoft's open-source (!) implementation of the Ruby language, licensed under the Microsoft Permissive License. This release contains early bits of a Ruby implementation for .NET based on the DLR (Dynamic Language Runtime); you actually have to build it if you want to run it. Scott Guthrie shows command-line and WPF hello-world sample apps built with IronRuby. The IronRuby team is taking a unique (for Microsoft) approach - not only is the IronRuby implementation going to be open source (IronPython is open source already), they actually plan to host it on RubyForge and accept source code contributions. From the IronRuby Project Plans: "Next month we will be moving the IronRuby source code repository to be hosted on RubyForge. As part of this move we are also opening up the project to enable non-Microsoft developers to enlist in the project and contribute source code. We'll then work to implement the remaining features and fix compatibility issues found as more libraries and source are ported to run on top of it. The end result will be a compatible, fast, and flexible Ruby implementation on top of .NET that anyone can use for free." Unbelievable. Either Microsoft doesn't see any money behind IronRuby or this is some kind of evil experiment. I love Ruby and .NET. Sure, I will be contributing. Btw, don't confuse IronRuby with the Ruby.NET project. IronRuby is the new Microsoft Ruby implementation, while Ruby.NET is a soon-to-be-open-source Ruby implementation started by Queensland University in Australia. While IronRuby uses parts of the Ruby.NET compiler, John Lam sees IronRuby as a continuation of Ruby.NET. The two projects seem similar, so I believe the Ruby.NET story is probably over. I'm playing with IronRuby right now. This is cool stuff; now I want it to be fast, I want full Visual Studio support and I want it to be my primary language. Screw Java and C#, Ruby is where all the fun is.
And finally, more great news about Visual Studio 2008 from Scott Guthrie: "You'll see Beta2 ship later this week - so only a few more days now."
I was reading Scott's post on Reflector add-ins and had this idea... Now (well, not now, but in the next .NET version - Orcas) that XSLT can be compiled into a DLL, it must be time to think about an XSLT decompiler (and an appropriate Reflector add-in, of course). I believe that must be feasible. Would it be useful for you?
Remember that catchy RubyCLR motto? Now C# (Anders Hejlsberg) is playing catch-up, talking about automatic properties: public string Bar { get; set; } The above is meant to be translated by the compiler into private string foo;
public string Bar
{
get { return foo; }
set { foo = value; }
}
Now, I'm not sure I like the reuse of the abstract property notation, but still - way to go, guys.
This is old news, but I somehow missed it, so I'll post it for the news-challenged like me. Microsoft has released "Shared Source Common Language Infrastructure 2.0" aka Rotor 2.0 - buildable source code of the ECMA CLI and ECMA C# implementations. This is roughly the .NET 2.0 sources with the original comments. Priceless! It's released under the "MICROSOFT SHARED SOURCE CLI, C#, AND JSCRIPT LICENSE".
New in this release:
- Full support for Generics.
- New C# 2.0 language features like anonymous methods and anonymous delegates.
- BCL additions.
- Lightweight Code Generation (LCG).
- Stub-based dispatch. (What the hell is that?)
- Numerous bug fixes.
There is always Reflector, but Rotor is different - you build it, debug with it, learn from it and extend the CLI. Now, what do I want to play with? An editable XPathDocument, an XSLT-to-DLL compiler or an extensible XmlReader factory, maybe...
LINQ May 2006 CTP installs C# 3.0 compiler and new C# language service into Visual Studio 2005. New syntax, keywords, Intellisense for extension methods and all that jazz.
This essentially disables the native C# 2.0 compiler and C# language service. If you installed LINQ on a Virtual PC - no big deal. But if not, and you want to switch C# back to 2.0 - there is a solution. The bin folder contains two little scripts called "Install C# IDE Support.vbs" and "Uninstall C# IDE Support.vbs". Just run the latter and your native C# 2.0 is back. Somehow there are only scripts for C#.
I've been asking for help getting NDoc working with .NET 2.0 recently. I was lucky: Kevin Downs, the developer of NDoc, sent me an alpha version of the next NDoc release that was good enough for generating the Mvp.Xml API documentation. And that unexpected problem made me realize that NDoc has reached such a level that we all take this tool for granted, like a part of the .NET SDK or Visual Studio - while NDoc is actually a free open-source tool developed by enthusiasts! NDoc is so awesome and pervasive that Microsoft doesn't even bother to provide any alternative solution. Java has javadoc and doclets, while Microsoft provides no tool for generating code documentation - and indeed, why would they, don't you have NDoc? That's an interesting open-source phenomenon.
But the cruel fact is that developing a tool like NDoc for no money is a tough challenge. In .NET 2.0 Microsoft introduced a huge amount of changes in the CLR/BCL and the SDK docs that NDoc has to adapt to, while the NDoc project is very low on contributors and donations, and Kevin has been recovering from some major health problems recently. It actually looked like the NDoc project was dead, but it's not! Kevin is working on the next version and it already works, as you can see here.
But my point is that we absolutely have to support the NDoc project. NDoc has saved Microsoft lots of money, while Kevin Downs doesn't even have an MSDN subscription or Visual Studio 2005! What a shame... Come on, Microsoft, show some love to NDoc and the .NET community! And NDoc users, especially the ones using NDoc in commercial projects - please support NDoc: donate some money to the project or directly to Kevin Downs.
I'm stuck one step before releasing the Mvp.Xml library v2.0 for .NET 2.0. I can't generate the API documentation, because NDoc doesn't support .NET 2.0 yet :( Apparently the NDoc wiki contains instructions on how to get it working with .NET 2.0, but the wiki seems to be down and the Google cache is empty... Does anybody know how to hack NDoc?
I can't refrain from linking to this wonderful "Does Visual Studio Rot the Mind?" paper by Charles Petzold. Sorry. That's gonna be another good source of citations.
I'm migrating lots of code to .NET 2.0 nowadays and wanted to warn my readers about one particularly nasty migration issue. When moving from Hashtable to Dictionary&lt;K, V&gt;, look carefully at indexer usage - there is a runtime breaking change here. With Hashtable, when you do myhashtable[mykey] and myhashtable doesn't contain mykey, you just get null. But Dictionary&lt;K, V&gt; in this case throws KeyNotFoundException! This is of course not due to the evil will of Microsoft, but because unlike Hashtable, Dictionary&lt;K, V&gt; can store not only reference types but value types too, so null is not an option anymore. Read the discussion at Brad Abrams's blog for more info about how this mess happened.
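A minimal repro of the gotcha (my own sketch, not from Brad's post), plus the TryGetValue pattern that replaces the old null check after migration:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class MigrationGotcha
{
    static void Main()
    {
        // .NET 1.x style: a missing key just yields null.
        Hashtable ht = new Hashtable();
        object missing = ht["nokey"];
        Console.WriteLine(missing == null);        // True - no exception

        // .NET 2.0 style: the same indexer access throws.
        Dictionary<string, int> dict = new Dictionary<string, int>();
        try
        {
            int value = dict["nokey"];
        }
        catch (KeyNotFoundException)
        {
            Console.WriteLine("KeyNotFoundException");
        }

        // The safe replacement for the old "check for null" idiom:
        int v;
        bool found = dict.TryGetValue("nokey", out v);
        Console.WriteLine(found);                  // False
    }
}
```

TryGetValue does one lookup instead of the ContainsKey-then-indexer double lookup, so it's also the faster idiom.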
Microsoft has put hundreds of hours of PDC 2005 videos online at http://microsoft.sitestream.com/PDC05. Here is a list of XML-related and other interesting presentations worth watching, IMHO.
That's an interesting chat:
C# 3.0 Language Enhancements
Description: Can't attend PDC but still want to talk to the C# team? This chat is your chance! Join the C# team to discuss the newly announced C# 3.0 features like extension methods, lambda expressions, type inference, anonymous types and the .NET Language Integrated Query Framework. You've been hearing rumblings about this for a while, now we finally talk in depth about the future of the C# language.
Add to Calendar
September 22, 2005
1:00 - 2:00 P.M. Pacific time
Additional Time Zones
Here are some amazing facts about Microsoft Visual Studio:
- Visual Studio 2005 will have 2700 commands that come from Microsoft alone, 800 of them - shared ones
- Visual Studio is well factored into 250 basic packages
- Visual Studio is the base for 36 SKUs
- Visual Studio 2003 shipped with 358 keyboard shortcuts
- 410 commands shipped in Visual Studio 2003 have no names
- Visual Studio client teams are using 11 different tree controls, 9 different wizard frameworks, 15 custom tooltip controls
- Visual Studio and Office have a different application font than Windows
- Visual Studio has nearly 6000 images
- In Visual Studio 2005, 2900 of these images will be upgraded from 16-color to 32-bit color
[From "Visual Studio 2005 UI Guidelines", available with VSIP SDKs].
Good news for those who missed the opportunity to retake a Microsoft Certification exam for free - the Second Shot Offer has been extended through August 2005.
So yesterday I passed the 70-316 exam ("Developing and Implementing Windows-based Applications with Microsoft Visual C# .NET and Microsoft Visual Studio .NET"). A week of slack preparation, a bad surprise in the form of too many questions on the darned DataSets - but anyway, I got 900 out of 1000. Now that I have passed these three exams (70-315, 70-316 and 70-320) I should reach the MCAD certification status I wanted.
Hey, I have passed 4 certification exams during the last 5 months (3 for Microsoft and one for IBM), and not a single one before in my life. Must be some sort of psychological compensation effect. That was fun actually, but I'd better stop now. Actually, I'm going to take two more exams to reach MCSD, but later, later.
Now back to System.Xml v2.0 - I love it already, like a new car or a new PC. Same feelings!
Hey, just look at this. It says Visual Studio 2005 Beta2 will ship April 25, in just 16 short days and nights. Great. After Visual Studio .NET and Visual Studio .NET 2003 it's the third version, and it will rock not only according to the third-version law. I especially enjoy the improvements in the XML area. I hope to finally be able to work with XML in Visual Studio and throw XML Spy out.
The day can't go well when it starts like this.
That's really sad to see. The guy, who "used exceptions quite extensively to pass messages from the database all the way to the client", tested (no, "tested") the cost of throwing exceptions in .NET on his desktop using this "test":
Sub ThrowException()
Try
Throw New Exception
Catch ex As Exception
Finally
End Try
End Sub
Timed with Console.WriteLine(Now().ToString("hh:mm:ss.fffffff")), of course :)
And the conclusions are amazing:
1. Modern computers are fast. Really fast. Really, really, really, really, really fast.
3. Throwing one exception won't affect performance.
4. Throwing ten exceptions (nested or otherwise) won't affect performance.
5. Throwing one hundred exceptions (nested or otherwise) probably won't affect performance.
6. Throwing one thousand nested exceptions will most definitely cause your application to perform slowly.
7. The call stack actually supports 1000 levels of recursion
8. Some people don't believe Lessons #1, #3, and #4.
9. An individual's Title does not automatically mean they have any clue what they're talking about.
If someone ever says "because it's faster," think of Lessons #1 and #9 and laugh.
That's what I call a manifesto of resource wasters!
We've been planning to use NAnt in our product for running customizable scripts and had almost convinced our boss to go for it (IBM's WebSphere server, where all server automation is implemented via Ant, is a good argument here). But unfortunately, we've found out that Ant and NAnt have different licenses. Ant is of course released under the very pointy-haired-boss-friendly Apache Software License Version 2.0, while NAnt (which I mistakenly thought was just a .NET clone of Ant) is under a scary GNU-compatible license, which may be a red light for some companies. So now we are waiting for the legal department's answer on using GNU-licensed software in our product :(
Kenny Kerr has posted another instalment in his amazing "Introduction to MSIL" blog series. It's about the brilliant for-each construct, which was introduced by Visual Basic and has since been adopted by VB.NET, C#, C++ and even Java. Worth reading.
Besides, I really like the idea of learning from blogs - you know, you just skim feeds, read what catches your attention - and learn new things. Oh, and which new things you learn - that depends on your interests, of course!
C# Chat: The C# IDE
Have some questions about expansions, intellisense, or type colorization? Have some suggestions for or comments about refactoring support? Join the C# IDE team to discuss the past, present and future of the C# IDE.
December 2, 2004
1:00 - 2:00 P.M. Pacific time
Add to Calendar
Here is what I learnt from Jackie Goldstein's talk on .NET Worst Practices at the .NET Deep Dive conference in Tel-Aviv last Thursday. There is a subtle but hugely important difference between how .NET and Java re-throw a caught exception, and I somehow missed it when learning .NET. Not that I didn't know what "throw;" does in C# - I was mistaken about what "throw ex;" does!
In Java, when you do "throw ex;", ex is re-thrown as if it hadn't been caught at all - no information about the re-throwing is ever recorded and the original stack trace is preserved. If you do want to start the exception's stack trace from the re-throwing point - oh, that's a completely different story: you need to refill the exception's stack trace using the fillInStackTrace() method.
In .NET however, when you do "throw ex;", ex is re-thrown, but the original stack trace gets overridden. The point where the exception is re-thrown becomes the exception's origin. Here is what I mean. If you follow your Java habits and write
using System;

public class MyApp
{
    public static void F()
    {
        throw new NotImplementedException("Too lazy to implement!");
    }

    public static void Main()
    {
        try
        {
            F();
        }
        catch (Exception e)
        {
            Console.WriteLine("Exception {0} has occurred!", e.GetType());
            throw e; //Line 18
        }
    }
}
you'll get:
Exception System.NotImplementedException has occurred!
Unhandled Exception: System.NotImplementedException: Too lazy to implement!
at MyApp.Main() in d:\projects\test\class2.cs:line 18
See, you've lost the original exception's stack trace, and now you're going to have a really hard time figuring out what was actually wrong - where the exception was thrown in the first place.
So in .NET you have to use the "throw" keyword ("Throw" in VB.NET) with no argument to perform a pure re-throw of an exception - change line 18 to just "throw;" and the result will be
Exception System.NotImplementedException has occurred!
Unhandled Exception: System.NotImplementedException: Too lazy to implement!
at MyApp.F() in d:\projects\test\class2.cs:line 6
at MyApp.Main() in d:\projects\test\class2.cs:line 18
Now you can see the full exception stack trace.
Basically, MSIL (CIL) has two instructions - "throw" and "rethrow" - and guess what: C#'s "throw ex;" gets compiled into MSIL's "throw", and C#'s "throw;" into MSIL's "rethrow"! I can see the reason why "throw ex" overrides the stack trace; that's quite intuitive if you think about it for a moment. But the "throw" syntax for the "rethrow" instruction is not really intuitive. It smells of the stack-based MSIL, which is obviously under the cover but should actually be kept there. I guess they wanted to keep the number of C# keywords small - that's the reason. So you'd better just know this stuff - use "throw;" to re-throw an exception in .NET.
Well, that's just a simple level-100 quiz aiming to imprint the "standard random number generators are not really random" message on those who still lack it.
What will the following C# snippet produce?
System.Random rnd = new System.Random(12345);
System.Random rnd2 = new System.Random(12345);
for (int i = 0; i < 1000; i++)
    if (rnd.Next() != rnd2.Next())
        Console.WriteLine("Truly random!");
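For the record, here is a runnable variant of the quiz (spoiler in the comments) - two generators seeded with the same value produce identical sequences, so the condition never fires:

```csharp
using System;

class Quiz
{
    static void Main()
    {
        // System.Random is a deterministic pseudo-random generator:
        // identical seeds yield identical sequences.
        Random rnd = new Random(12345);
        Random rnd2 = new Random(12345);

        int mismatches = 0;
        for (int i = 0; i < 1000; i++)
            if (rnd.Next() != rnd2.Next())
                mismatches++;

        Console.WriteLine(mismatches); // 0 - the original snippet prints nothing
    }
}
```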
I've got a problem. It's a .NET problem. In XInclude.NET I'm fetching resources by URI using the WebRequest/WebResponse classes. Everything seems to be working fine; the only problem is as follows: when the URI is a file system URI, the content type property is always "application/octet-stream". Looks like it's hardcoded in the System.Net.FileWebResponse class (sic!). I mean - when I open Windows Explorer, the file's properties say: "Type of file: XML File" and "Opens with: XMLSPY". So Windows definitely knows it's XML, and in the registry I can see the .xml file extension is associated with the "text/xml" content type. So why does FileWebResponse always say "application/octet-stream"? Am I doing something wrong, or is it really that limited? Any workarounds?
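One possible workaround - my own sketch, not anything FileWebResponse offers - is to look the MIME type up in the registry yourself, keyed by the file extension, and fall back to whatever the response reported:

```csharp
using System;
using System.IO;
using Microsoft.Win32;

public class ContentTypeLookup
{
    // Sketch: map a file extension to the MIME type registered under
    // HKEY_CLASSES_ROOT\<ext> in the "Content Type" value,
    // e.g. ".xml" -> "text/xml" on a typical Windows box.
    public static string GuessContentType(string path, string fallback)
    {
        string ext = Path.GetExtension(path);
        RegistryKey key = Registry.ClassesRoot.OpenSubKey(ext);
        if (key != null)
        {
            try
            {
                string contentType = key.GetValue("Content Type") as string;
                if (contentType != null)
                    return contentType;
            }
            finally
            {
                key.Close();
            }
        }
        return fallback; // no registration found - keep what FileWebResponse said
    }

    public static void Main()
    {
        Console.WriteLine(GuessContentType("sample.xml", "application/octet-stream"));
    }
}
```

Obviously Windows-only, and it trusts whatever the registry says - but so does Explorer.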
Interesting news from Microsoft Research:
The F# compiler is an implementation of an ML programming language for .NET. F# is essentially an implementation of the core of the OCaml programming language (see http://caml.inria.fr). F#/OCaml/ML are mixed functional-imperative programming languages which are excellent for medium-advanced programmers and for teaching. In addition, you can access hundreds of .NET libraries using F#, and the F# code you write can be accessed from C# and other .NET languages.
Find more on F# homepage.
Wesner Moise (.NET Undocumented) writes on enum performance in .NET.
While enums are value types and are often recognized and treated like standard integral values by the runtime (in IL, enums and integers have almost no distinction), there are a few performance caveats to using them.
Enumerated types are derived from ValueType and Enum (as well as Object), which are, ironically, reference types. An explicit conversion of an enum value to ValueType, will actually perform boxing and generate an object reference.
Any calls to an inherited method from any of those classes will also actually invoke boxing, prior to calling the base method. This includes the following methods: GetType(), ToString(), GetHashCode() and Equals(). In addition to the cost of implicit boxing there is the far larger cost of the reflection used to actually complete the said methods.
That's obvious, but this is not really:
ToString uses reflection, the first time it is called, to dynamically retrieve the enumeration constants from the enumerated type, and stores those values in a hash table for future use. However, GetHashCode always uses reflection to retrieve the underlying value. And while ValueType.Equals will attempt a fast bit check for a value type with no reference fields, as is the case for enumerated types, it won't be faster than a direct compare.
This is true for any value type, but normally the cost can be eliminated for ToString, GetHashCode, and Equals, by simply overriding those methods and avoiding calls to the base methods. However, those methods CANNOT be overridden for enumerated types.
And this is sad:
Another ironic conclusion is that creating your own version of an enumerated type, not derived from Enum, is going to be faster than the CLR versions, because you can ensure that GetHashCode, Equals, ToString, IComparable, and IComparable<T> are not inherited from any of base classes such as ValueType.
Now what? Back to Java "enums"?
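A tiny sketch of where the boxing sneaks in (my own illustration, not from Wesner's post): the == comparison stays a plain integral compare, while the methods inherited from Object/Enum need object references.

```csharp
using System;

enum Color { Red, Green, Blue }

class EnumBoxing
{
    static void Main()
    {
        Color c = Color.Red;

        // No boxing: compiles down to a direct integral comparison.
        bool same = c == Color.Red;

        // Boxing: Enum.Equals takes an object parameter, so the argument
        // is boxed, and the comparison goes through the Enum override.
        bool same2 = c.Equals(Color.Red);

        // ToString is inherited from System.Enum and resolves the constant
        // name via reflection (cached after the first call).
        Console.WriteLine(c.ToString()); // Red

        Console.WriteLine(same && same2); // True
    }
}
```

None of this matters in ordinary code, but in a tight loop preferring == over Equals for enums is free performance.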
Mikhail Arkhipov is trying to come up with a reasonable syntax for expressing generic controls in future versions of ASP.NET (he doesn't think it will make it into Whidbey). So far all the candidates look plain ugly or unextendable (e.g. with respect to multiple type parameters), needless to say malformed according to XML or even SGML:
<vc:SomeGenericControl<SomeObjectType> runat="server" />
<vc:SomeGenericControl:SomeObjectType runat="server" />
<vc:SomeGenericControl.SomeObjectType runat="server" />
<vc:SomeGenericControl(SomeObjectType1.SubType1, SomeObjectType2.SubType2)
runat="server" />
Any ideas?
Wesner Moise (.NET Undocumented blog) compares the good old .NET 1.x System.Collections.Hashtable and the brand new Whidbey Dictionary&lt;K,V&gt;. Interesting. In short:
- New collision elimination strategy - chaining instead of probing. Yeah, an array-based linked list for each bucket. Allegedly it doubles perf! Who said linked lists are just an interviewers' toy?
- As a consequence - more thrifty memory usage, especially when storing value types.
- Dictionary preserves the insertion order of keys (an implementation detail, not a guarantee).
- An empty Dictionary occupies only 40 bytes.
- Struct-based enumerators, hence fast enumeration.
- No probing, hence no more load factor.
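The "array-based linked list" trick is easy to sketch. This is a toy model (not the real BCL code): buckets[] holds the index of the first entry for each hash bucket, and every entry stores the index of the next entry in the same bucket, so collisions chain through a flat array instead of probing:

```csharp
using System;

// Toy sketch of Dictionary<K,V>-style chaining, hardcoded to string -> int.
public class ToyDictionary
{
    private int[] buckets;   // index of the first entry per bucket, -1 = empty
    private int[] next;      // index of the next entry in the same bucket
    private string[] keys;
    private int[] values;
    private int count;

    public ToyDictionary(int capacity)
    {
        buckets = new int[capacity];
        next = new int[capacity];
        keys = new string[capacity];
        values = new int[capacity];
        for (int i = 0; i < capacity; i++) buckets[i] = -1;
    }

    private int Bucket(string key)
    {
        return (key.GetHashCode() & 0x7FFFFFFF) % buckets.Length;
    }

    public void Add(string key, int value)
    {
        int b = Bucket(key);
        keys[count] = key;
        values[count] = value;
        next[count] = buckets[b];   // chain in front of the old bucket head
        buckets[b] = count++;
    }

    public bool TryGetValue(string key, out int value)
    {
        for (int i = buckets[Bucket(key)]; i >= 0; i = next[i])
            if (keys[i] == key) { value = values[i]; return true; }
        value = 0;
        return false;
    }

    public static void Main()
    {
        ToyDictionary d = new ToyDictionary(4);
        d.Add("one", 1);
        d.Add("two", 2);
        d.Add("three", 3);
        int v;
        Console.WriteLine(d.TryGetValue("two", out v) && v == 2); // True
        Console.WriteLine(d.TryGetValue("missing", out v));       // False
    }
}
```

Since entries are appended to a flat array in arrival order, you also get the memory locality and the order-preserving enumeration mentioned above for free (the real class grows and rehashes, which this toy skips).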
Oh boy, what a month. Here is another juicy release I wish I had any free time to dig in: VSIP SDK 2005 Beta 1.
The Mono project (an open-source implementation of the .NET Framework for Linux, Unix and Windows) has reached the Beta1 stage. They say Mono 1.0 could be released as soon as this summer.
Now to the funny part. I was reading the Release Notes while downloading the release and found myself in the contributors list :) Well, in fact there are some classes in the Mono codebase marked as created by me. But I should admit my contribution was really a small one - several patches, several trivial classes (in the System.Xml.Xsl namespace), and then I lost interest due to personal reasons. I still have write access to the Mono CVS, so maybe, some day, who knows, again...
Visual Studio 2005 Community Technology Preview March 2004 - Full DVD available for MSDN subscribers!
It's been Microsoft DevDays 2004 in Israel today. Well, DevDay actually. Here are the impressions I took away:
- One has to get up earlier to not miss the keynote.
- VS.NET has cool PocketPC emulator.
- Code Access Security is omnipotent.
- Lutz Roeder's .NET Reflector may hang in the middle of a presentation.
- WS-Security is great and Yosi Taguri is a bright speaker, but he scrolls code too fast.
- Zero Deployment is amazingly simple.
- They are really anxious about security nowadays. All attendees were given the "Writing Secure Code" book for free. Aaah, a bookworm's joy. "Required reading at Microsoft. - Bill Gates" is written on the book's front cover.
MSDN has started a new Data Access &amp; Storage Developer Center, msdn.microsoft.com/data, "the home for developer information on Microsoft data technologies from MSDN" (via Chris Sells). Great, worth subscribing to. A list of related bloggers (an indispensable attribute of any portal nowadays) is at http://msdn.microsoft.com/data/community/blogs. Stars like Dino Esposito, Mike Gunderloy, Andrew Conrad, Michael Rys, Dare Obasanjo and Christa Carpentiere (the editor of this Dev Center) are among them. I bet the Data Access &amp; Storage Developer Center's gonna rock. Smart people plus great technology - a perfect match.
I wonder whether an XML Developer Center is next on the MSDN launch pad, and who will be in the editor's chair?
First they closed blogs. Now Dare has moved the RSS Bandit project to SourceForge. Hmmm...
Today I felt the Ouroboros snake breathing right in my cubicle when I realized XSLT is able to write output into its own input tree. Funny, huh?
XmlDocument doc = new XmlDocument();
doc.Load("input.xml");
XslTransform xslt = new XslTransform();
xslt.Load("test.xsl");
XmlNodeWriter nw = new XmlNodeWriter(doc.DocumentElement, false);
xslt.Transform(doc, null, nw);
nw.Close();
This transformation outputs the result tree directly into the document element of the input tree! Moreover, during the transformation the input tree is being dynamically changed, and the XSLT processor is even able to see the output tree in the input and process it again!
Of course, you'd better not loop the transformation forever with a plain &lt;xsl:copy-of select="/"/&gt;.
Practical usage? Highly efficient in-place update of an in-memory DOM using XSLT, with no interim buffers. Kinda dangerous though, because the output can destroy the input before it's processed, or loop forever - but a nice one anyway.
Just found a new beast in the Longhorn SDK documentation - the OPath language: "The OPath language is the query language used to query for objects using an ObjectSpace. The syntax of OPath also allows you to query for objects using standard object oriented syntax. OPath enables you to traverse object relationships in a query as you would with standard object oriented application code and includes several operators for complex value comparisons."
The Orders[Freight &gt; 5].Details.Quantity &gt; 50 OPath expression should remind you of something familiar. Object-oriented XPath cross-bred with SQL? Hmm, xml-dev flamers would love it.
The approach seems to be exactly opposite to ObjectXPathNavigator's - instead of representing object graphs in an XPath-navigable form, a brand new query language is invented to fit the data model. Actually that makes some sense; XPath as an XML-oriented query language can't fit all. I wonder what Dare thinks about it. More studying is needed, but as for me (note I'm not a DBMS-oriented guy though), it's still too crude.
Well, it's an extremely well-chewed topic, well covered by many posters, but since people keep asking I feel I have to give a complete example of the most effective way (IMO) of solving this old recurring question - how to transform a CSV or tab-delimited file using XSLT?
The idea is to represent the non-XML formatted data as pure XML, to be able to leverage everyone's favorite XML hammer - XSLT. I want to make it clear that approaching the problem this way doesn't abuse XSLT as an XML transformation language: the non-XML data is represented as XML, and XSLT operates on it through the prism of the XPath data model, actually having no idea it was a CSV file on the hard disk.
Let's say what's given is a tab-delimited file containing some info about customers, such as customer ID, name and address. You need to produce an HTML report with the customers grouped by country. How? Here's how: all you need is XmlCsvReader (kudos to Chris Lovett), an XSLT stylesheet and a couple lines of code to glue the solution together:
Code:
using System;
using System.Xml;
using System.Xml.XPath;
using System.Xml.Xsl;
using System.IO;
using Microsoft.Xml;
public class Sample {
public static void Main() {
//XMLCSVReader setup
XmlCsvReader reader = new XmlCsvReader();
reader.Href = "sample.txt";
reader.Delimiter = '\t';
reader.FirstRowHasColumnNames = true;
//Usual transform
XPathDocument doc = new XPathDocument(reader);
XslTransform xslt = new XslTransform();
xslt.Load("style.xsl");
StreamWriter sw = new StreamWriter("report.html");
xslt.Transform(doc, null, sw);
sw.Close();
}
}
XSLT stylesheet
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:key name="countryKey" match="/*/*" use="country"/>
<xsl:template match="root">
<html>
<head>
<title>Our Customers Worldwide</title>
</head>
<body>
<table style="border:thin solid orange;">
<xsl:for-each select="*[count(.|key('countryKey',
country)[1])=1]">
<xsl:sort select="country"/>
<tr>
<th colspan="2"
style="text-align:center;color:blue;">
<xsl:value-of select="country"/>
</th>
</tr>
<tr>
<th>Customer Name</th>
<th>Account Number</th>
</tr>
<xsl:apply-templates
select="key('countryKey', country)"/>
</xsl:for-each>
</table>
</body>
</html>
</xsl:template>
<xsl:template match="row">
<tr>
<xsl:if test="position() mod 2 = 1">
<xsl:attribute name="bgcolor">silver</xsl:attribute>
</xsl:if>
<td>
<xsl:value-of
select="concat(fname, ' ',mi, ' ', lname)"/>
</td>
<td>
<xsl:value-of select="account_num"/>
</td>
</tr>
</xsl:template>
</xsl:stylesheet>
Resulting HTML:
Canada
    Customer Name          Account Number
    Derrick I. Whelply     87470586299
    Michael J. Spence      87500482201
    Brenda C. Blumberg     87544797658
Mexico
    Customer Name          Account Number
    Sheri A. Nowmer        87462024688
    Rebecca Kanagaki       87521172800
    Kim H. Brunner         87539744377
USA
    Customer Name          Account Number
    Jeanne Derry           87475757600
    Maya Gutierrez         87514054179
    Robert F. Damstra      87517782449
    Darren M. Stanz        87568712234
The main virtue of this approach is that all the transformation and presentation logic is concentrated in one place - the XSLT stylesheet (add CSS according to your taste); the C# code is fully agnostic of the data being processed. In the same fashion the CSV file can be queried using XQuery or XPath. Once the data is represented as XML, all doors are open.
Seems like the old dreams about deeply extending Visual Studio .NET - up to adding new languages, editors and debuggers - without that funny-not-for-me COM programming, but using beloved C#, are finally coming true! Microsoft is inviting beta testers to the VSIP Extras Beta program. The killer feature:
.NET Framework support. Interop assemblies are provided to allow VSIP packages to be developed in C#, managed extensions for C++, or Visual Basic. New samples have been provided in managed languages and the documentation has been updated to include information about managed code development.
Go fill out the Beta Nomination Survey - maybe you are lucky enough to be chosen.
I've got a bunch of ideas, from an XSLT debugger to an XQuery editor, postponed until this can be done in C#, because I'm really weak in COM.
While Don Box is declaiming the glory of VB, Mark Fussell is busy with quite the opposite business - he's reading the burial service over XmlDocument aka the DOM. Worth quoting in full:
The XML DOM is dead. Long live the DOM.
Dearest DOM, it is with little remorse,
to see that your API has run its course.
You expose your nodes naked and bare,
with no chance of any optimizations there.
Your (cough) data model is just to complex,
and causes developers to vex
over how to deal with CDATA, notations and entity refs.
So it is with a small tear welling in my eye,
that I watch the completion of your demise.
In .NET the XPathDocument has now taken your throne,
as the king of the XML API-dom.
Goodbye DOM, just disappear and die,
I will not miss you with your unweildly API.
Goodbye DOM, goodbye.
RIP DOM. Viva XPath!
So, nxslt version 1.3 is at your service. New features include:
- Support for the XML Inclusions (XInclude) 1.0 Candidate Recommendation, done by incorporating the XInclude.NET library into nxslt. XML Inclusions are processed in both the source XML and the XSLT stylesheet; it's turned on by default and can be disabled using the -xi option.
- Improved EXSLT support. Now nxslt leverages the EXSLT.NET implementation. That means more EXSLT extension functions supported, with much better performance and compatibility.
- A small advanced feature for EXSLT.NET developers - support for an external EXSLT.NET assembly.
Download it here or here (GotDotNet). It's free of course. Thorough documentation is here.
Today is the day - I'm glad to announce the XInclude.NET 1.0 release. Download it here. For those who have no idea what XInclude.NET is:
XInclude.NET is a free open-source implementation of the XInclude 1.0 Candidate Recommendation and the XPointer Framework Recommendation, written in C# for the .NET platform. XInclude.NET supports the XPointer element() Scheme, XPointer xmlns() Scheme, XPointer xpath1() Scheme and XPointer xpointer() Scheme (XPath subset only).
Changes since 1.0beta release:
- Support for the XPointer xpointer() Scheme (XPath subset only)
- Bug fixes
- Performance improvements
No big deal, but it took me the whole of yesterday to fix the reported bugs, optimize a bit and prepare the release. Hope you'll like it.
Next on the agenda: an article about all this plumbing.
Now it's time to come back to my beloved XML plumbing - XInclude and XPointer. A bit of polish, and tomorrow I'm going to release XInclude.NET 1.0. Changes since 1.0beta: XPointer xpointer() Scheme support (XPath subset only), bug fixes and minor performance improvements.
Along with that I've started an article about XInclude and XInclude.NET - what a good exercise for the brain, much harder than regular programming. So more to come.
//Whoohaa!
// (assuming doc is an XmlDocument and nav is its XPathNavigator)
XPathExpression expr = nav.Compile("set:distinct(//author)");
expr.SetContext(new ExsltContext(doc.NameTable));
XPathNodeIterator authors = nav.Select(expr);
while (authors.MoveNext())
Console.WriteLine(authors.Current.Value);
EXSLT's set:distinct in an XPath-only selection. Sweet. Coming soon, watch for announcements!
In related news - yesterday I was given Mono CVS commit access, thanks to Ben and Miguel. Seems like I'm the first Oleg amongst the Mono guys, so my account is just "oleg".
Now I desperately need one more hour in the day; it's a pity the Earth is so close to the Sun - 24 hours is really not enough for us!
Am I right that it's impossible to validate an in-memory XmlDocument without serializing it to a string and reparsing?
XmlValidatingReader requires an instance of XmlTextReader, and what's worse, it uses its internal properties, not exposed in the public XmlTextReader API - so it won't work even if one provided a fake XmlTextReader instance that encapsulates an XmlNodeReader. :(
According to the XPath data model an element node may have a unique identifier (ID), which can then be used to select a node by its ID using XPath's id() function and to navigate using the XPathNavigator.MoveToId method. Querying by ID is extremely efficient because in fact it doesn't require traversing the XML document; instead, almost every XPath implementation I've ever seen just keeps an internal hashtable of IDs, hence querying by ID is merely a matter of getting a value from a hashtable by a key.
The XPath 1.0 Recommendation, published back in 1999, of course says nothing about XML Schema, which was published in 2001. Maybe that's the reason why the XmlDocument and XPathDocument (and therefore XslTransform) classes in .NET don't support this tasty functionality when the XML document is defined using XML Schema. Only DTD is supported, unfortunately. Even if you have defined an xs:ID typed attribute in your schema and validated the document by reading it via XmlValidatingReader, it won't work. As a matter of fact it does work in MSXML4, though.
Whether it's right or wrong - I have no idea; it's quite a debatable question.
On the one hand XPath spec explicitly says "If a document does not have a DTD, then no element in the document will have a unique ID.". On the other hand XML Schema was published 2 years after XPath 1.0 and provides semantically the same functionality as DTD does, so XPath 2.0 is now deeply integrated with XML Schema. And it works in MSXML4... I'm wondering what people think about it?
Anyway, here is another act of hackery: how to force the XmlDocument and XPathDocument classes to turn on id() and XPathNavigator.MoveToId support when the document is validated against an XML Schema and not a DTD. Apparently XmlValidatingReader collects the ID information anyway, but it's asked for this collection only when XmlDocument/XPathDocument encounter a DocumentType node in the XML. So let's give them this node - I mean, let's emulate it. Here is the code:
public class IdAssuredValidatingReader : XmlValidatingReader {
    private bool _exposeDummyDoctype;
    private bool _isInProlog = true;

    public IdAssuredValidatingReader(XmlReader r) : base(r) {}

    public override XmlNodeType NodeType {
        get {
            return _exposeDummyDoctype ?
                XmlNodeType.DocumentType :
                base.NodeType;
        }
    }

    public override bool MoveToNextAttribute() {
        return _exposeDummyDoctype ?
            false :
            base.MoveToNextAttribute();
    }

    public override bool Read() {
        if (_isInProlog) {
            if (!_exposeDummyDoctype) {
                //We are looking for the very first element
                bool baseRead = base.Read();
                if (base.NodeType == XmlNodeType.Element) {
                    _exposeDummyDoctype = true;
                    return true;
                } else {
                    return baseRead;
                }
            } else {
                //Done, switch back to normal flow
                _exposeDummyDoctype = false;
                _isInProlog = false;
                return true;
            }
        } else {
            return base.Read();
        }
    }
}
And proof of concept:
source.xml
<root
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="D:\Untitled1.xsd">
<file id="F001" title="abc" size="123"/>
<file id="F002" title="xyz" size="789"/>
<notification id="PINK" title="Pink Flowers"/>
</root>
In Untitled1.xsd schema (elided for clarity) id attributes are declared as xs:ID.
The usage:
public class Test {
    static void Main(string[] args) {
        XmlValidatingReader vr =
            new IdAssuredValidatingReader(
                new XmlTextReader("source.xml"));
        vr.ValidationType = ValidationType.Schema;
        vr.EntityHandling = EntityHandling.ExpandEntities;
        XmlDocument doc = new XmlDocument();
        doc.Load(vr);
        Console.WriteLine(
            doc.SelectSingleNode("id('PINK')/@title").Value);
    }
}
Another one:
public class Test {
    static void Main(string[] args) {
        XmlValidatingReader vr =
            new IdAssuredValidatingReader(
                new XmlTextReader("source.xml"));
        vr.ValidationType = ValidationType.Schema;
        vr.EntityHandling = EntityHandling.ExpandEntities;
        XPathDocument doc = new XPathDocument(vr);
        XPathNavigator nav = doc.CreateNavigator();
        XPathNodeIterator ni = nav.Select("id('PINK')/@title");
        if (ni.MoveNext())
            Console.WriteLine(ni.Current.Value);
    }
}
In both cases the result is "Pink Flowers".
I'm not sure which semantics this hack breaks. The only deficiency I see is that the dummy emulated DocumentType node actually becomes visible in the resulting XmlDocument (XPathDocument is not affected because the XPath data model knows nothing about the DocumentType node type).
Any comments?
An interesting question has been raised in the microsoft.public.dotnet.xml newsgroup: how to compile an XPath expression without an XML document at hand? The XPathNavigator class does provide such functionality via its Compile() method, but XPathNavigator is an abstract class, hence this functionality is available only to its implementers, such as the internal DocumentXPathNavigator and XPathDocumentNavigator classes, which are accessible only via the corresponding XmlDocument and XPathDocument. Therefore the obvious solutions are: use a dummy XmlDocument or XPathDocument object to get an XPathNavigator and make use of its Compile() method, or implement a dummy XPathNavigator class. Dummy object vs dummy implementation, hehe. Well, the dummy implementation at least doesn't allocate memory per compilation, so I'm advocating that solution. Below is the implementation and its usage:
public sealed class XPathCompiler {
    private sealed class DummyXpathNavigator : XPathNavigator {
        public override XPathNavigator Clone() {
            return new DummyXpathNavigator();
        }
        public override XPathNodeType NodeType {
            get { return XPathNodeType.Root; }
        }
        public override string LocalName {
            get { return String.Empty; }
        }
        public override string NamespaceURI {
            get { return String.Empty; }
        }
        public override string Name {
            get { return String.Empty; }
        }
        public override string Prefix {
            get { return String.Empty; }
        }
        public override string Value {
            get { return String.Empty; }
        }
        public override string BaseURI {
            get { return String.Empty; }
        }
        public override String XmlLang {
            get { return String.Empty; }
        }
        public override bool IsEmptyElement {
            get { return false; }
        }
        public override XmlNameTable NameTable {
            get { return null; }
        }
        public override bool HasAttributes {
            get { return false; }
        }
        public override string GetAttribute(string localName,
            string namespaceURI) {
            return string.Empty;
        }
        public override bool MoveToAttribute(string localName,
            string namespaceURI) {
            return false;
        }
        public override bool MoveToFirstAttribute() {
            return false;
        }
        public override bool MoveToNextAttribute() {
            return false;
        }
        public override string GetNamespace(string name) {
            return string.Empty;
        }
        public override bool MoveToNamespace(string name) {
            return false;
        }
        public override bool MoveToFirstNamespace(
            XPathNamespaceScope namespaceScope) {
            return false;
        }
        public override bool MoveToNextNamespace(
            XPathNamespaceScope namespaceScope) {
            return false;
        }
        public override bool HasChildren {
            get { return false; }
        }
        public override bool MoveToNext() {
            return false;
        }
        public override bool MoveToPrevious() {
            return false;
        }
        public override bool MoveToFirst() {
            return false;
        }
        public override bool MoveToFirstChild() {
            return false;
        }
        public override bool MoveToParent() {
            return false;
        }
        public override void MoveToRoot() {}
        public override bool MoveTo(XPathNavigator other) {
            return false;
        }
        public override bool MoveToId(string id) {
            return false;
        }
        public override bool IsSamePosition(XPathNavigator other) {
            return false;
        }
        public override XPathNodeIterator SelectDescendants(string name,
            string namespaceURI, bool matchSelf) {
            return null;
        }
        public override XPathNodeIterator SelectChildren(string name,
            string namespaceURI) {
            return null;
        }
        public override XPathNodeIterator SelectChildren(
            XPathNodeType nodeType) {
            return null;
        }
        public override XmlNodeOrder ComparePosition(
            XPathNavigator navigator) {
            return new XmlNodeOrder();
        }
    }

    private static XPathNavigator _nav =
        new DummyXpathNavigator();

    public static XPathExpression Compile(string xpath) {
        return _nav.Compile(xpath);
    }
}
public class XPathCompilerTest {
    static void Main(string[] args) {
        //Document-free compilation
        XPathExpression xe = XPathCompiler.Compile("/foo");
        //Usage of the compiled expression
        XPathDocument doc =
            new XPathDocument(new StringReader("<foo/>"));
        XPathNavigator nav = doc.CreateNavigator();
        XPathNodeIterator ni = nav.Select(xe);
        while (ni.MoveNext()) {
            Console.WriteLine(ni.Current.Name);
        }
    }
}
This weekend I was completely unplugged; my wife took me away from computers and we drove to Tiberias. No laptop, no internet, just two days of swimming in the Sea of Galilee aka Kineret and fish-eating. It was great.
Apparently at the same time the article I was talking about finally made its appearance in the Extreme XML column on MSDN. Here it is: "Producing Multiple Outputs from an XSL Transformation". It's about how to achieve multiple-output XSLT in .NET. My first article, so any comments, especially critical ones, will be greatly appreciated. Is it well-written, or at least clear? Should MultiXmlTextWriter be developed further? I've been thinking about the HTML output method; this could be done by creating an HtmlTextWriter : XmlWriter, like the System.Web.UI.HtmlTextWriter one, but implementing XmlWriter instead. Probably not a bad idea.
So version 1.2 of nxslt released.
Changes since 1.1:
- built-in support for 60 EXSLT extension functions (huge thanks to Dare)
- support for custom extension functions
- minor bug fixes
Built-in support for 60 EXSLT extension functions (yes, with conformant names :), full list of supported functions:
The killer!
A new revelation from Chris Brumme, now about AppDomains. A must-read.
Here is another easy-to-solve-when-you-know-what-is-wrong problem. It took me a couple of hours to find the solution, so I wanna share it. Hope it'll be useful to someone.
The problem. When adding custom XPath extension functions as described in the "HOW TO: Implement and Use Custom Extension Functions When You Execute XPath Queries in Visual C# .NET" KB article and the "Adding Custom Functions to XPath" article in MSDN's Extreme XML column, you can find that any XPath expression having namespace prefixes, like "/foo:bar", just can't be evaluated due to a nasty System.ArgumentNullException deep in the XPath engine.
The reason. It turned out that internal XPath classes, e.g. BaseAxisQuery, expect a custom XsltContext implementation to resolve namespace prefixes (XsltContext extends XmlNamespaceManager) with respect to the NameTable, just as the internal default XsltContext implementation - the UndefinedXsltContext class - does. The documentation unfortunately omits that point, and so do the sample implementations in the above articles.
The solution. Just override LookupNamespace(string prefix) method in your XsltContext implementation and pass given prefix through the NameTable:
public override string LookupNamespace(string prefix) {
    if (prefix == String.Empty)
        return String.Empty;
    string uri = base.LookupNamespace(NameTable.Get(prefix));
    if (uri == null)
        throw new XsltException("Undeclared namespace prefix - " +
            prefix, null);
    return uri;
}
Easy, ain't it? Stupid me - I spent two hours to get it.
Update: This hack is about .NET 1.x. In .NET 2.0 you don't need it - with the XslCompiledTransform class you can return a nodeset as XPathNavigator[].
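For illustration, here is a minimal sketch of that .NET 2.0 route. The class and namespace names (DistinctExtension, http://foo.com) and the inline stylesheet are my own made-up example; the point is only that XslCompiledTransform accepts a plain XPathNavigator[] returned from an extension function as a nodeset:

```csharp
using System;
using System.Collections;
using System.IO;
using System.Xml;
using System.Xml.XPath;
using System.Xml.Xsl;

public class DistinctExtension {
    // In .NET 2.0 an extension function may return XPathNavigator[];
    // XslCompiledTransform treats it as a nodeset - no ResetableIterator hack.
    public XPathNavigator[] distinct(XPathNodeIterator nodeset) {
        Hashtable seen = new Hashtable();
        ArrayList result = new ArrayList();
        while (nodeset.MoveNext()) {
            if (!seen.Contains(nodeset.Current.Value)) {
                seen.Add(nodeset.Current.Value, null);
                result.Add(nodeset.Current.Clone());
            }
        }
        return (XPathNavigator[])result.ToArray(typeof(XPathNavigator));
    }
}

public class Demo {
    public static void Main() {
        string xml = "<doc><i v='a'/><i v='b'/><i v='a'/></doc>";
        string xslt =
            "<xsl:stylesheet version='1.0'" +
            " xmlns:xsl='http://www.w3.org/1999/XSL/Transform'" +
            " xmlns:ext='http://foo.com'>" +
            "<xsl:output method='text'/>" +
            "<xsl:template match='/'>" +
            "<xsl:for-each select='ext:distinct(//@v)'>" +
            "<xsl:value-of select='.'/>" +
            "</xsl:for-each>" +
            "</xsl:template>" +
            "</xsl:stylesheet>";
        XslCompiledTransform trans = new XslCompiledTransform();
        trans.Load(XmlReader.Create(new StringReader(xslt)));
        XsltArgumentList args = new XsltArgumentList();
        args.AddExtensionObject("http://foo.com", new DistinctExtension());
        StringWriter output = new StringWriter();
        trans.Transform(new XPathDocument(new StringReader(xml)), args, output);
        Console.WriteLine(output.ToString()); // distinct attribute values
    }
}
```

Note the Clone() on each navigator before storing it - with many iterators Current is the same moving object, so storing it unclosed would leave every array entry pointing at the last node.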
As we all know, there is unfortunately a confirmed bug in the .NET Framework's XSLT implementation which prevents returning a nodeset from an XSLT extension function. Basically the problem is that the XSLT engine expects a nodeset resulting from an extension function to be an object of the internal ResetableIterator class. Full stop :(
Some workarounds were discovered. The first one - create a new interim DOM object and query it by XPath, which returns an instance of the ResetableIterator class. Its main deficiency is loss of node identity, because the returned nodes belong to the interim DOM tree, not to the input nodeset. Another workaround, discovered by Dimitre Novatchev, is to run an interim XSL transformation within the extension function - this also allows one to create an instance of the ResetableIterator class to return.
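The interim-DOM workaround could look roughly like this (my own illustration, not the original code; element names are made up). The iterator comes from the framework's own XPath engine, which is why XslTransform accepts it, but note the copies have lost their identity:

```csharp
using System;
using System.IO;
using System.Xml;
using System.Xml.XPath;

public class InterimDomDemo {
    // Copy each input node's value into a scratch XmlDocument, then select
    // the copies back with XPath. On .NET 1.x the returned iterator is a
    // ResetableIterator under the hood, so XslTransform accepts it - at the
    // price of node identity: these are copies, not the input nodes.
    public static XPathNodeIterator CopyToInterimDom(XPathNodeIterator nodeset) {
        XmlDocument interim = new XmlDocument();
        XmlElement root = interim.CreateElement("nodes");
        interim.AppendChild(root);
        while (nodeset.MoveNext()) {
            XmlElement item = interim.CreateElement("node");
            item.InnerText = nodeset.Current.Value;
            root.AppendChild(item);
        }
        return interim.CreateNavigator().Select("/nodes/node");
    }

    public static void Main() {
        XPathDocument doc = new XPathDocument(
            new StringReader("<d><x>a</x><x>b</x></d>"));
        XPathNodeIterator copies = CopyToInterimDom(
            doc.CreateNavigator().Select("//x"));
        while (copies.MoveNext())
            Console.WriteLine(copies.Current.Value);
    }
}
```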
This morning I found another workaround, which doesn't require creation of any interim objects. It's a frontal attack, and someone would call it a hack, but I wouldn't. Here it is. There is an internal XPathArrayIterator class in the System.Xml.XPath namespace, which represents an XPathNodeIterator over an ArrayList and also kindly implements our beloved ResetableIterator class. So why not just instantiate it via reflection and return it from an extension function, huh?
Assembly systemXml = typeof(XPathNodeIterator).Assembly;
Type arrayIteratorType =
    systemXml.GetType("System.Xml.XPath.XPathArrayIterator");
return (XPathNodeIterator)Activator.CreateInstance(
    arrayIteratorType,
    BindingFlags.Instance | BindingFlags.Public |
        BindingFlags.CreateInstance,
    null, new object[] { myArrayListofNodes },
    null);
Below is proof-of-concept extension function to filter distinct nodes from a nodeset:
Extension function impl and test class:
using System;
using System.Xml.XPath;
using System.Xml.Xsl;
using System.IO;
using System.Reflection;
using System.Collections;

namespace Test2 {
    class Test {
        static void Main(string[] args) {
            XPathDocument doc = new XPathDocument(args[0]);
            XslTransform trans = new XslTransform();
            trans.Load(args[1]);
            XsltArgumentList argList = new XsltArgumentList();
            argList.AddExtensionObject("http://foo.com",
                new MyXsltExtension());
            trans.Transform(doc, argList, new StreamWriter(args[2]));
        }
    }

    public class MyXsltExtension {
        public XPathNodeIterator distinct(XPathNodeIterator nodeset) {
            Hashtable nodelist = new Hashtable();
            while (nodeset.MoveNext()) {
                if (!nodelist.Contains(nodeset.Current.Value)) {
                    nodelist.Add(nodeset.Current.Value, nodeset.Current);
                }
            }
            Assembly systemXml = typeof(XPathNodeIterator).Assembly;
            Type arrayIteratorType =
                systemXml.GetType("System.Xml.XPath.XPathArrayIterator");
            return (XPathNodeIterator)Activator.CreateInstance(
                arrayIteratorType,
                BindingFlags.Instance | BindingFlags.Public |
                    BindingFlags.CreateInstance,
                null, new object[] { new ArrayList(nodelist.Values) },
                null);
        }
    }
}
Source xml doc (exsl:distinct()'s example):
<doc>
<city name="Paris"
country="France"/>
<city name="Madrid"
country="Spain"/>
<city name="Vienna"
country="Austria"/>
<city name="Barcelona"
country="Spain"/>
<city name="Salzburg"
country="Austria"/>
<city name="Bonn"
country="Germany"/>
<city name="Lyon"
country="France"/>
<city name="Hannover"
country="Germany"/>
<city name="Calais"
country="France"/>
<city name="Berlin"
country="Germany"/>
</doc>
Stylesheet:
<xsl:stylesheet
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"
    xmlns:ext="http://foo.com" extension-element-prefixes="ext">
  <xsl:template match="/">
    <distinct-countries>
      <xsl:for-each select="ext:distinct(//@country)">
        <xsl:value-of select="."/>
        <xsl:if test="position() != last()">, </xsl:if>
      </xsl:for-each>
    </distinct-countries>
  </xsl:template>
</xsl:stylesheet>
And the result is:
<distinct-countries>
Germany, Austria, Spain, France
</distinct-countries>
I like it. Comments?
Just released XInclude.NET 1.0beta.
Changes since 1.0alpha:
So enjoy.
Sometimes on the rainy days of our lives we can find ourselves looking for a way to create something impossible, say a method containing a dash in its name ;)
Well, if it seems to be impossible in one reality, try another one. It's impossible in C#, but it's possible in MSIL, so here is a hack:
- Disassemble your dll or executable using the MSIL Disassembler:
ildasm.exe /out=Lib.il Lib.dll
(Note that ildasm also creates a resource file, Lib.res, along with Lib.il; you'll need this file afterwards.)
- Find your method in the disassembled MSIL (Lib.il); usually it looks like
.method public hidebysig instance string
FunnyMethod(string s) cil managed
and make its name more funny by inserting a dash (you then have to surround the method's name with apostrophes to satisfy the syntax analyzer):
.method public hidebysig instance string
'Funny-Method'(string s) cil managed
- Now just assemble fixed MSIL file back to dll or executable using the MSIL Assembler:
ilasm.exe Lib.il /RESOURCE=Lib.res /DLL
That's it - you've created a Lib.dll assembly which contains a Funny-Method(string) method in your class. Of course you can't invoke this method directly, only through reflection, but sometimes that's enough.
Oh, and one last thing - it's a hack, don't use it.
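For completeness, a tiny sketch of the reflection side. Since I can't reproduce the patched Lib.dll here, the Greet method below is a stand-in; the point is that GetMethod takes an arbitrary string, so a dash-named method is looked up exactly the same way:

```csharp
using System;
using System.Reflection;

public class Demo {
    // Stand-in for the class inside the patched Lib.dll; GetMethod takes any
    // string, so "Funny-Method" would resolve the same way as "Greet" does.
    public string Greet(string s) { return "hi " + s; }

    public static void Main() {
        Type t = typeof(Demo);
        // For the patched assembly this would be t.GetMethod("Funny-Method").
        MethodInfo m = t.GetMethod("Greet");
        object result = m.Invoke(new Demo(), new object[] { "there" });
        Console.WriteLine(result); // prints "hi there"
    }
}
```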
I've implemented XPointer support (shorthand pointers and the xmlns(), element() and xpath1() schemes) for the XInclude.NET project.
(Btw, I'm wondering if XPointer may be useful not only in XInclude context?)
It was really fun and good exercise. Here are some details:
Parsing. The XPointer grammar is actually one of the simplest and can easily be parsed even by regexp, as Gudge has demonstrated in his implementation. But I'm no regexp fan, especially for parsing (I've been a lex/yacc fan for ages). Instead I decided to write a custom lexer and parser, just as the .NET guys did for XPath and C#. The lexer (aka scanner) scans the expression char by char, taking care of escaping, and builds low-level lexemes (NCName, QName, Number etc). The parser then assembles those lexemes into higher-level grammar constructs (PointerPart, SchemaName, SchemaData etc) according to the grammar and builds the XPointer object model - a compiled XPointer pointer, ready for evaluation. It took me the whole day, but now I can agree to some degree with Peter Hallam's explanation of why they didn't use lex/yacc for the C# compiler - sometimes a hand-written parser really is faster and more maintainable than a lex/yacc based solution.
Evaluating. Well, I chose the easy way and implemented XPointer evaluation using XmlDocument, just as Gudge did. It's so attractively easy. XPathDocument, though, would be a better candidate from many points of view: performance (it's more optimized for XPath evaluation), memory footprint (it's read-only) and data model conformance (there are subtle differences between the underlying XmlDocument and XPathDocument data models, e.g. regarding adjacent text nodes - DOM allows them, but the XPath data model doesn't). I'll consider moving to XPathDocument later; that would additionally require an XmlReader wrapper around XPathNavigator, but fortunately Don has solved that problem already.
That's it. It looks quite powerful and seems to be working fine. E.g.
<xi:include href="test2.xml#xmlns(foo=http://foo.com)
    xpath1(//foo:item[@name='bar'])
    element(items3/2)"/>
This includes all item elements in the "http://foo.com" namespace which have "bar" as the name attribute's value, or, if none are found for some reason, it includes the second child element of the element whose ID is "items3".
Now cleaning, commenting, documenting, testing and releasing.
Working on the XPointer parser for the XInclude.NET project, I just realized there is no way (if I'm not mistaken) in .NET to check whether a character is an XML whitespace character - plus all the rest of that functionality needed when parsing XML lexical constructs. No big deal; I had to resort to an old Java trick:
public static bool IsWhitespace(char ch) {
    return (ch <= 0x0020) &&
        (((((1L << 0x0009) |
            (1L << 0x000A) |
            (1L << 0x000C) |
            (1L << 0x000D) |
            (1L << 0x0020)) >> ch) & 1L) != 0);
}
And that's a double pity, because the XmlCharType class does implement all that XML-related lexical jazz in a very optimized way, but it's internal, and not all of its power is exposed through other means (e.g. it is possible to verify a string as an XML NCName using the XmlConvert.VerifyNCName(string) method, which leverages XmlCharType underneath).
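As a quick illustration of that one exposed door (this is standard XmlConvert behavior, not XmlCharType itself):

```csharp
using System;
using System.Xml;

public class NCNameDemo {
    public static void Main() {
        // VerifyNCName returns its argument when valid and throws XmlException
        // otherwise - one public route into XmlCharType's lexical checks.
        Console.WriteLine(XmlConvert.VerifyNCName("foo-bar"));
        try {
            XmlConvert.VerifyNCName("foo:bar"); // colon not allowed in an NCName
        } catch (XmlException) {
            Console.WriteLine("not an NCName");
        }
    }
}
```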
More good news: as Joshua Allen has confirmed, they are working on making XmlReader easier to implement, primarily by "making some stuff that is currently abstract virtual". I look forward to seeing it.
Gudge thinks it's better to expose the synthetic xml:base attribute as the first one in order to solve the access-by-index problem. Sounds convincing. I actually haven't implemented index-based access yet, only access via the navigational methods MoveToFirstAttribute()/MoveToNextAttribute()/MoveToAttribute(). The last one is obvious, and for the first two my logic was as follows: when the core method call returns false, I treat it as there-are-no-more-attributes and switch the state machine to exposing the synthetic xml:base attribute, so it's always the last one.
But I wasn't clear about my main concern in this topic - in fact the xml:base attribute might not be synthetic at all if a top-level included element already has an xml:base attribute. In that case, according to the XInclude spec, its value should be replaced; hence in the GetAttribute(int index)/this[int index] members, if index is the existing xml:base attribute's index, another value should be returned. So the question is how to find out the existing xml:base attribute's index without resorting to an interim attribute collection.
Gudge is meditating on exposing synthetic attributes in XmlReader.
Here are some details on how I've implemented synthetic xml:base attribute in the XIncludingReader. List of members implementing the logic:
MoveToAttribute(), MoveToFirstAttribute(), MoveToNextAttribute(), ReadAttributeValue(), HasValue, IsDefault, Name, LocalName, NamespaceURI, NodeType, Prefix, QuoteChar, Value, ReadInnerXml(), ReadOuterXml(), ReadString(), AttributeCount, GetAttribute().
That's nearly 20 members (plus overloaded ones). Yeah, in SAX it's much easier, but anyway it's not rocket science - it's only 2-3 lines in each member after all. I wonder whether anything will change in the V2 XML API; they say they are working on improving the piping too.
Another point - I'm exposing xml:base on the fly, as the last attribute (as Gudge properly supposed), but this approach doesn't help with the GetAttribute(int)/MoveToAttribute(int) methods; probably I have to collect all existing attributes into some collection once and operate only on it afterwards.
I've released the first alpha version of the XInclude.NET library today. Once I got xml:base working and meekly passed through the XInclude Conformance Test Suite with almost no failures, I decided to release this stuff. There is still plenty of room for optimizations and XPointer is still not supported; anyway, I like the "release early/often" motto. So enjoy and file bugs :).
Exposing a virtual xml:base attribute in XmlReader was really a showstopper. I solved it by introducing a simple state machine and fiddling with it in MoveToNextAttribute(), ReadAttributeValue() and the other attribute-related methods.
So, XPointer is now on the agenda. I still believe it's possible to avoid using XmlDocument's or XPathDocument's facilities, because that assumes loading the whole document into memory. The element() scheme and shorthand pointers should both be implementable in a forward-only manner; the only problem here is how to determine ID-typed attributes, which would require reading the DTD or even the schema. Hmmm, well, we'll see.
According to GotDotNet download statistics my MultiXmlTextWriter class has been downloaded 398 times, while the last version of the nxslt utility, which includes it to support multi-output XSLT - only 91. Hmm, looks like people prefer a component to build their own solutions with rather than an old-fashioned versatile command line tool (not a really prominent observation, huh?).
My article about getting multiple outputs in XSLT under .NET that I was talking about will probably be published in June. That's my first authoring experience. Day to day I write code documentation and specifications, but never an article, so I'm kind of worried about it.
It's Passover holiday week in Israel now, so I have the whole week free to devote to interesting things (well, almost free - I also have to study BizTalk until the end of the month). So let's get back to the XInclude.NET project.
Don Box's Spoutlet:
In the interest of generality, Simon asks if there is an XmlReader implementation that traverses an XPathNavigator.
Such an implementation seems trivial, but an interesting point is that such an XPathNavigatorReader could easily give xpath1() XPointer scheme support to our XInclude.NET project!
And if I'm right in my assumption that the element() XPointer scheme can be translated to the xpath1() scheme on the fly by changing any NCName to id(NCName) and any number to *[number], this will give us element() scheme support too, with almost no effort.
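That on-the-fly translation could be sketched roughly like this - my own toy code, ignoring escaping and the full element() grammar, just the idea of rewriting a child sequence into XPath 1.0:

```csharp
using System;
using System.Text;

public class ElementSchemeTranslator {
    // Toy translation of an element() scheme child sequence to XPath 1.0:
    // a leading NCName becomes id('NCName'), every number N becomes *[N].
    public static string ToXPath(string pointer) {
        StringBuilder xpath = new StringBuilder();
        foreach (string step in pointer.Split('/')) {
            if (step.Length == 0)
                continue; // leading '/' of a root-based child sequence
            int n;
            if (int.TryParse(step, out n))
                xpath.Append("/*[").Append(n).Append(']');
            else
                xpath.Append("id('").Append(step).Append("')");
        }
        return xpath.ToString();
    }

    public static void Main() {
        Console.WriteLine(ToXPath("items3/2"));  // id('items3')/*[2]
        Console.WriteLine(ToXPath("/1/2"));      // /*[1]/*[2]
    }
}
```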
I'll elaborate on it further on the XInclude.NET Message Board.
Kirk Allen Evans is asking hard questions:
Should there be an XIncludeNodeList implementation that is the product of the merged Infosets? Or is this irrelevant since it would only apply to a fully-loaded DOM instance, which should already have been loaded?
I believe XInclude should stay at a low level of XML processing - just after XML parsing, before (or optionally after) validation, and surely before DOM or XPathDocument building and XSL transformations. This way it can stay simple and transparent.
Should loading an XIncludeDocument be in any part asynchronous?
Hmmm. What if we just feed XmlDocument through XIncludeReader to preserve XmlDocument's own async loading logic?
Should the first version of XIncludeReader support XPointer? If so, to what degree? Should we only support the XPointer elements() scheme?
Well, the XInclude rec requires (must level) support for the XPointer Framework (probably meaning shorthand pointers) and the element() scheme. But I'm not sure about the very first version. Many other XInclude implementations don't support XPointer, so it's not a problem to omit it for a while. But certainly we have to take XPointer processing into account in XInclude.NET even in the first version.
I guess this all boils down to answering "How complete should revision 1 be?"
Yeah, that's the key question. Well, I personally have no idea; probably no support for XPointer at all should be our first milestone - why not?
Rambling through blog space, I found Alexis Smirnov's blog and there a link to a quite interesting article named "Xslt Transformations and Xslt Intellesense within the .NET IDE" by Doug Doedens. It's about how to make XSLT authoring easier and more convenient in Visual Studio .NET.
That sounds similar to what I've been thinking about for the last few days - implementing for VS.NET all the XSLT-related authoring features I'm used to in XML Spy. Apart from adding the XSLT schema to enable IntelliSense drop-down hints, which is trivial as described in that article, I plan (but when?) to build a VS.NET add-in to allow one-click/one-key transformations with support for the xml-stylesheet processing instruction. I've got a prototype already; hope it'll grow into something useful.
I have released nxslt version 1.1.
nxslt is a .NET XSLT command line utility, written in C#.
Timings are now more accurate - I'm using the System.Diagnostics.PerformanceCounter class.
Two new features: custom URI resolver and multiple output.
The first one is trivial - it's now possible to provide a resolver class name to resolve URIs in xsl:include and xsl:import elements and the document() function. So basically nxslt is ready for XML Catalogs; let's just wait till a .NET implementation appears. Actually, I considered implementing it, but decided the XInclude.NET project looks more interesting at the moment.
Multiple output - using the partially supported exsl:document extension element it's now possible to create multiple result documents in one transformation run. Extremely powerful stuff, believe me. I'd rather not unveil the implementation details though (haha, it's open source) as I'm going to publish an article about it.
So enjoy. nxslt can be used from the command line or integrated into an IDE, such as XML Spy or Xselerator. Btw, wouldn't it be nice to integrate it with VS.NET? I imagine one-click XSLT transformation inside the VS.NET XML editor, for instance.
Dino Esposito has published a quite comprehensive article, named
Real-World XML: Manipulate XML Data Easily with Integrated Readers and Writers in the .NET Framework in May MSDN mag issue.
While reading the article, two things caught my eye - the usual denial of SAX's usefulness, and another API quirk which should be remembered.
- Being particularly a fan of XML pull processing, I nevertheless don't understand why one would completely deny the usefulness of push processing. I like both push and pull - why limit myself to only one? Pull is good when the application knows what it wants to pull out, and push is good for generic rule-based processing.
"All the functions of a SAX parser can be implemented easily and more effectively by using an XML reader."
I'm still not convinced - in the next version of the XmlReader API, maybe, but not now. Consider the MSDN example of an attributes-to-elements converter based on XmlTextReader. Hmm - state machinery, 4 overridden members... And here is the SAX version:
import org.xml.sax.*;
import org.xml.sax.helpers.*;

public class Attrs2ElementsFilter extends XMLFilterImpl {
    public void startElement(String namespaceURI, String localName,
            String qualifiedName, Attributes atts) throws SAXException {
        AttributesImpl newAttributes = new AttributesImpl();
        super.startElement(namespaceURI, localName,
            qualifiedName, newAttributes);
        for (int i = 0; i < atts.getLength(); i++) {
            super.startElement("", atts.getLocalName(i),
                atts.getQName(i), newAttributes);
            super.characters(atts.getValue(i).toCharArray(), 0,
                atts.getValue(i).length());
            super.endElement("", atts.getLocalName(i),
                atts.getQName(i));
        }
    }
}
As for me, SAX won in this particular task.
-
A quirky one that needs to be remembered. (Surely they will change it in the V2 API.)
While the API allows an XmlReader argument to the XmlValidatingReader constructor, it must actually be an XmlTextReader.
Note that although the signature of one of the XmlValidatingReader constructors refers generically to an XmlReader class as the underlying reader, that reader can only be an instance of the XmlTextReader class or a class which derives from it. This means that you cannot use any class which happens to inherit from XmlReader (such as a custom XML reader). Internally, the XmlValidatingReader class assumes that the underlying reader is an XmlTextReader object and specifically casts the input reader to XmlTextReader. If you use XmlNodeReader or a custom reader class, you will not get any error at compile time but an exception will be thrown at run time.