Friday, February 25, 2011

Add a DropDownList to a DataGrid dynamically and raise its events in VB


Insert the DataGrid onto the DEFAULT.ASPX screen

In my example, I am displaying two columns, the first will hold the element name, the second will hold the instance value. At this point, we will leave the grid empty. In other instances, you may wish to populate the screen at design time with controls which are data independent.

<asp:datagrid BorderWidth="1" id="dg1"
style="Z-INDEX: 101; LEFT: 8px; POSITION: absolute; TOP: 8px"
runat="server" AutoGenerateColumns="False" ShowHeader="False"
BorderStyle="None" GridLines="Both">
<Columns>
<asp:TemplateColumn>
<ItemTemplate>
</ItemTemplate>
</asp:TemplateColumn>
<asp:TemplateColumn>
<ItemTemplate>
</ItemTemplate>
</asp:TemplateColumn>
</Columns>
</asp:datagrid>

Step 3 - Bind the data to the DataGrid

It is important to bind the control, even though it does not yet contain any data elements, so that a binding event is raised for each row of the DataSet.

dg1.DataSource = myDS
dg1.DataMember = "myDetails"
DataBind()

Step 4 - Inserting the controls into the DataGrid

The trick to dynamically inserting the controls is to use the ItemDataBound event associated with the onscreen DataGrid control. The ItemDataBound event will be called for each row of data in the DataSet. When each row event is raised, the appropriate control is created and inserted into the DataGrid. This example uses the e.Item.DataSetIndex to link the DataGrid rows and the DataSource rows.

Private Sub dg1_ItemDataBound(ByVal sender As Object, _
ByVal e As System.Web.UI.WebControls.DataGridItemEventArgs) _
Handles dg1.ItemDataBound

'ensure we are looking at an item being bound

If e.Item.ItemType = ListItemType.Item Or e.Item.ItemType = _
ListItemType.AlternatingItem Then
'always insert the dataelement name into the first column as a label.

Dim myLBL As New Label
myLBL.EnableViewState = True
myLBL.Text = dgTable(dg1).Rows(e.Item.DataSetIndex)(0)
e.Item.Cells(0).Controls.Add(myLBL)

If dgTable(dg1).Rows(e.Item.DataSetIndex)(2) = "txtbox" Then
'insert textbox and load with data

Dim myTB As New TextBox
If Not IsPostBack Then
myTB.Text = dgTable(dg1).Rows(e.Item.DataSetIndex)(1)
End If
e.Item.Cells(1).Controls.Add(myTB)
End If
If dgTable(dg1).Rows(e.Item.DataSetIndex)(2) = "checkbox" Then
'insert checkbox and load with data

Dim myCB As New CheckBox
If Not IsPostBack Then
If dgTable(dg1).Rows(e.Item.DataSetIndex)(1).ToLower() = "t" Then
myCB.Checked = True
Else
myCB.Checked = False
End If
End If
e.Item.Cells(1).Controls.Add(myCB)
End If
If dgTable(dg1).Rows(e.Item.DataSetIndex)(2) = "ddl" Then
'insert dropdownlist and load with data

Dim myDDL As New DropDownList
'for simplicity I will assume the "CarMake" table is
'always used as the dropdownlist data source

For yy As Int16 = 0 To _
dg1.DataSource.Tables("CarMake").Rows.Count - 1
myDDL.Items.Add(dg1.DataSource.Tables("CarMake").Rows(yy)(0))
Next
e.Item.Cells(1).Controls.Add(myDDL)

If Not IsPostBack Then
myDDL.SelectedIndex = _
CType(dgTable(dg1).Rows(e.Item.DataSetIndex)(1), Int16)
End If
End If
End If
End Sub

'This function returns the table bound to the datagrid

Private Function dgTable(ByVal dg As DataGrid) As DataTable
Return dg.DataSource.Tables(dg.DataMember)
End Function
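
The title of this post also mentions raising an event from the dynamically added DropDownList, which the listing above does not show. Here is a minimal sketch of one way to wire that up; the handler name is invented, and it assumes the grid is bound on every request so that the control tree (and therefore the handler) is rebuilt on postback.

'inside the "ddl" branch of dg1_ItemDataBound, just after creating myDDL
myDDL.ID = "ddl" & e.Item.ItemIndex.ToString() 'give the control a predictable ID
myDDL.AutoPostBack = True                      'post back as soon as the selection changes
AddHandler myDDL.SelectedIndexChanged, AddressOf DynamicDDL_SelectedIndexChanged

'handler for the dynamically created dropdownlist (name is illustrative)
Private Sub DynamicDDL_SelectedIndexChanged(ByVal sender As Object, ByVal e As EventArgs)
    Dim ddl As DropDownList = CType(sender, DropDownList)
    'react to the new selection here, e.g. write it back to the DataSet
    Trace.Write("Selected value: " & ddl.SelectedItem.Text)
End Sub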

How to call server-side code from the AJAX client side

In the AutoCompleteExtender's OnClientItemSelected property, add the JavaScript method to be called: OnClientItemSelected="AutoCompletedClientItemSelected".
Within the AutoCompletedClientItemSelected JavaScript method, issue a __doPostBack("AutoCompleteExtender", sender._element.value); call, which triggers the postback to the server.
<script language="javascript" type="text/javascript">
    // postBack handler for AJAX AutoComplete Extender - onClientItemSelected
    function AutoCompletedClientItemSelected(sender, e) {
        __doPostBack("AutoCompleteExtender", sender._element.value);
    }
</script>

<div id="find">
    <div id="findbox">
        <span class="label">JHA Name</span>
        <asp:TextBox ID="tbJHAName" CssClass="namebox" runat="server" AutoComplete="Off"
            Width="550px" Style="margin-top: 0px"></asp:TextBox>
    </div>
    <asp:AutoCompleteExtender ID="tbJHAName_AutoCompleteExtender" runat="server" Enabled="true"
        ServicePath="" ServiceMethod="GetJHANames" TargetControlID="tbJHAName" MinimumPrefixLength="3"
        UseContextKey="True" OnClientItemSelected="AutoCompletedClientItemSelected">
    </asp:AutoCompleteExtender>
    <a id="linkAdvancedSearch" class="advancedlink" href="Javascript:void(0)" onclick="ShowAdvanced(this);">
        Advanced Search</a>
</div>
Code Behind Page:
Filter the EventTarget message sent from the JavaScript method in the Page_Load event.
protected void Page_Load(object sender, EventArgs e)
{
    if (IsPostBack)
    {
        // postBack handler for AJAX AutoComplete Extender - JavaScript call: AutoCompletedClientItemSelected
        if (Request.Form["__EVENTTARGET"] != null &&
            Request.Form["__EVENTTARGET"] == "AutoCompleteExtender" &&
            Request.Form["__EVENTARGUMENT"] != null)
        {
            // Emulate button click search
            btnSearch_Click(null, null);
        }
    }
}

Customizing the display for AutoCompleteExtender

The AutoCompleteExtender that comes with the Ajax Control Toolkit is great for linear data, but what happens when you want to customize its output, e.g. display the multiple values in table form?
This is definitely possible since, at the end of the day, the UI is rendered using JavaScript; however, you will need to do some tweaking.
WebService
In order to return multiple values, your web service needs to return an object[] array, or more specifically a Pair array. This is because the client-side JavaScript that reads the web service is written such that, if a Pair is returned, it sets the whole Pair object as the "value" for the item.
[WebMethod]
public object[] GetCompletionList(string prefixText, int count)
{
    // demo code
    if (count == 0)
    {
        count = 10;
    }
    if (prefixText.Equals("xyz"))
    {
        return new string[0];
    }
    Random random = new Random();
    var items = new List<Pair>(count);
    // the serializer is not used in this simplified demo; it is handy if you prefer to return JSON strings
    var jss = new System.Web.Script.Serialization.JavaScriptSerializer();
    for (int i = 0; i < count; i++)
    {
        char c1 = (char)random.Next(65, 90);
        char c2 = (char)random.Next(97, 122);
        char c3 = (char)random.Next(97, 122);
        int id = i;
        int age = random.Next(18, 70);
        // First = the item id, Second = the values to display in the custom table
        var item = new Pair(id.ToString(), new object[] { prefixText + c1 + c2 + c3, age });
        items.Add(item);
    }
    return items.ToArray();
}
Now that the web service is returning the data, let's move on to the client-side JavaScript. What we need to do is "override" the onClientItemSelected and onClientShowing handlers.
The example below assumes an HTML table display format.
function OnClientItemSelected(behaviour, args) {
    var element = behaviour.get_element();
    var control = element.control;
    // args._value is the Pair object returned by the web service
    if (control && control.set_text)
        control.set_text(args._value.First);
    else
        element.value = args._value.First;
}

function OnClientShowing(behaviour, e) {
    var ResultsDiv = behaviour.get_completionList();
    for (var i = 0; i < ResultsDiv.childNodes.length; i++) {
        var item = ResultsDiv.childNodes[i];
        // the Pair object stores the values to display in its second member, so get it out for our own processing
        var vals = item._value.Second;
        // remove all child nodes
        while (item.childNodes.length > 0)
            item.removeChild(item.firstChild);
        // create the table
        var tbl = document.createElement("table");
        tbl.setAttribute("width", "100%");
        tbl.setAttribute("border", "1");
        tbl.setAttribute("style", "border: solid 1px black; border-collapse: collapse;");
        tbl._value = item._value;
        var tbody = document.createElement("tbody");
        var tr = document.createElement("tr");
        tr.setAttribute("valign", "top");
        tr._value = item._value;
        for (var j = 0; j < vals.length; j++) {
            CreateTableCell(tr, vals[j], item._value);
        }
        tbody.appendChild(tr);
        tbl.appendChild(tbody);
        item.appendChild(tbl);
    }
}

function CreateTableCell(tr, str, value, width) {
    var td = document.createElement("td");
    // very important: if you forget to set td._value, args._value will be null when onClientItemSelected fires
    td._value = value;
    if (width)
        td.setAttribute("width", width); // width is optional - the call above omits it
    td.innerHTML = str;
    tr.appendChild(td);
}

Wednesday, February 23, 2011

Vision and challenges in testing .NET applications

1. Introduction
This white paper shares a vision of, and the challenges in, testing .NET applications. Never before has any technology or framework tried to bring so many disparate systems and languages under one roof for the benefit of enterprise applications. With the advent of the .NET architecture - with languages like Visual Basic, component technologies like COM, Web Services built on XML and SOAP (Simple Object Access Protocol), and servers like SQL Server and BizTalk - delivering enterprise solutions has become much easier.

We will examine .NET from several perspectives:
  1. First, we will look at how a web-based system needs to support different application views for internal employees, trusted partners, and external customers. 
  2. Next, we examine the architecture that is required to support the system. The architecture consists of different logical tiers. We will see how .NET tiers interact with each other, and how information flows between them. 
  3. Then, we look at the implications that .NET has for all kinds of testing, from both a functional and performance angle. We will consider what it takes to successfully deploy this type of application.
.NET arrived at a time when systems designers were facing incredible technical and business challenges. Once consumers, and then businesses, adopted the Internet as a means of application deployment, the very essence of 'what a business system was' changed forever. Designers have learned the hard way that it is relatively easy to build a web page that presents customer details, but much more difficult to integrate all the behind-the-scenes 'plumbing' that gets that information from dissimilar data sources. This requires design both at the point of delivery - the browser - and at an infrastructure level, where we could be dealing with many different platforms, operating systems and application technologies. Successful e-Business deployment is fundamentally an integration issue.
2. .NET architecture for applications
2.1 Application Tiers
Since the beginning of IT, systems designers have faced a number of common problems. The four main areas that any design must address are:
  • Display or render information according to context and the capabilities of the device.
  • Ensure that the business model is properly implemented in the system.
  • Retrieve the required information from the data source.
  • Ensure that the appropriate levels of confidentiality, integrity and access are supported.
The main technical requirements for a .NET application are no different from other IT systems. The difference is in degree and context. Historically, applications had a single device in mind (e.g. a dumb terminal or PC). With .NET, we can support the PC-based browser, devices such as PDAs and mobile phones, and classic Windows applications too. From a design perspective we need to consider how to logically insulate the presentation by using techniques like display transformation, possibly based on XML/XSL. From a deployment viewpoint, we need to recognize that compatibility testing is a much broader issue. These four areas affect the user community differently too. On an intranet, we may have absolute control over the environment, so we can use DHTML to deliver a very rich presentation layer. We cannot make the same assumption about extranet or Internet users, even though they may want access to the same functionality or business logic (e.g. account lookups). .NET makes it a lot easier to meet these challenges compared to historical approaches to Internet or client/server development. It provides a framework that addresses both user and technical needs.
Before .NET existed, there were a number of approaches to building web-based applications. Microsoft provided various development tools and servers that delivered a very useful framework for the design. Microsoft's key technologies included IIS (Internet Information Server), SQL Server, Commerce Server, and Index Server, as well as development tools like Visual Studio, InterDev and even FrontPage. MTS (Microsoft Transaction Server) and MSMQ (Message Queue) delivered high levels of performance for application servers. ASPs (Active Server Pages) provided the focal point for web page development with dynamic content. The ASP framework provided a conduit between the client browser and the back-end servers. It allowed developers to control many of the key functions of the application (e.g. the presentation layer and data access), although ASPs were arguably less capable when it came to separating out the business logic.
An ASP is a dynamic process that runs on the web server. It serves up our web page for the browser to display. It performs a variety of tasks in conjunction with the IIS web server, the security model and our underlying data sources before it sends the page to the browser. The client-side browser may also perform some processing, although this is typically presentation related. ASP and IIS maintain various objects that allow the client and server to communicate with each other or maintain context (e.g. Request/Response). This model raises some questions for designers. Do we do everything on the server (which makes it easier to manage), or do we let the client do something (which is better for performance)? In reality, we need to do both.

Moving Towards .Net
The older ASP model had some limitations. Developers found it difficult to code an elegant, modular solution. An Active Server Page was a combination of static HTML and programming via one or more scripting languages. Displaying even simple information required large amounts of code, before allowing for database access or calls to business services. Active Server Pages did not have an access path to business services as such, which meant that our presentation code was tightly bound to data access logic. This could create considerable code maintenance problems. In addition, the application could be inflexible or have problems scaling. Developing business services in COM/DCOM ((Distributed) Component Object Model) overcame some of the issues. However, COM skills were difficult to master and, even then, components could not interact directly or run on non-Microsoft platforms.
There was a clear need to address the following development issues:
a) Have a better application model built on common Internet standards, e.g. HTTP, XML and SOAP.
b) Separate presentation, business logic and data access, e.g. introduce Web Services as identifiable and re-usable components.
c) Bring Visual Basic-style productivity to web forms design.
d) Preserve existing developments (ASP and Windows) while moving forward to a unified approach.
.NET does this through a number of tools, products, servers and the CLR (Common Language Runtime) that make up the complete solution.

.NET introduces new servers like BizTalk that provide high-performance, secure document or data exchange.
XML & SOAP
.NET uses XML & SOAP messages as the communications "glue" between tiers to provide a flexible, accessible, standards-based approach to communications. Their extensive use in the .NET architecture allows designers to build Web Services that run on any platform.
Web Services
Because applications are now collaborative or outward facing, we need a mechanism to expose common business logic. The logic may sit inside our intranet (e.g. lookup employee details) or outside (e.g. perform a credit score via external agency). A Web Service provides a standard way of implementing both requests and can present its interface in an open, public format. Adding new functionality to an application should be simpler in a Web Services-led environment, because of this location independent approach.

3. What does this mean for developers & testing?
3.1 A sample .NET environment
As far as developers are concerned, .NET should make their life a lot easier. Consider the typical infrastructure required to support a serious web-based application. Developers no longer have to write monolithic code inside a single unit or Active Server Page. For example, using Visual Studio .NET the presentation logic is now much more compact. New-style ASPX pages support the concept of 'code behind'. This means that the business logic is separate, but .NET ensures that the two elements communicate effectively. In turn, the business (Web) service may format an XML message that triggers a request in the database server. That request may execute as a stored procedure, which finally accesses the data. Information passes back up the chain until the ASPX page renders the information in whatever format our device can handle (HTML, WML, etc.). Although a .NET application looks identical to existing web-based systems, the .NET environment has some unique characteristics that can impact test management and test development.
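
To make the 'code behind' separation concrete, here is a minimal sketch (not from the white paper) of the split between an ASPX page and its code-behind class; the file names, the MyApp.Details class and the CustomerService business facade are all invented for illustration.

Details.aspx (presentation markup only):
<%@ Page Language="vb" Codebehind="Details.aspx.vb" Inherits="MyApp.Details" %>
<html><body><form runat="server">
    <asp:Label id="lblCustomer" runat="server" />
</form></body></html>

Details.aspx.vb (the 'code behind' class; business work is delegated to a separate service):
Public Class Details
    Inherits System.Web.UI.Page
    Protected lblCustomer As System.Web.UI.WebControls.Label

    Private Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles MyBase.Load
        'CustomerService is a hypothetical business/Web service facade
        lblCustomer.Text = CustomerService.GetCustomerName(42)
    End Sub
End Class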
3.2 Methodology for .NET testing
Testing .NET requires a methodical approach which draws together test DESIGN, test DATA, PROCESSES, RESOURCES (technical and people) and ANALYSIS tools.
3.2.1 Understanding the development environment/cycle
Testing shouldn't be a separate activity that occurs in isolation from development. However, the connection between them is easily broken. We have shown that .NET has significantly improved solution development in a number of areas such as productivity, separation and cross platform deployment. .NET provides a framework for developing conventional Windows applications too. The array of languages, databases, interfaces and operating systems that make up our complete solution, end-to-end, is very broad. Each development decision may have an impact on our test management and execution (e.g. choosing Stored Procedures as our database access method vs. SQL requests). A strong understanding of .NET, the underlying technical architectures and the development environment provides a much better foundation for meeting and then executing your test strategy objectives.
3.2.2 Approach for Web Services (application components)?
Web Services can exist inside or outside of an organization. Therefore, we need to understand the parameters for acceptable performance. An Intranet service may have a smaller, known user base to contend with, but perhaps a high hit rate if the service is widely used. A service designed for external use (e.g. Company.Customer.CreditCheck) has a much broader usage base and has a very different security and performance profile.
Having understood the performance parameters we also need to understand more about the 'plumbing' that .NET provides. In fact, .NET supports both HTTP and SOAP requests. So, in a performance test we need to know which communication type our clients are using so that we can simulate that load on our system correctly.
Web Services are effectively decoupled from the presentation of data. Therefore, they have no UI (User Interface) of their own. An environment that includes Web Services has to focus on the integrity of those services from both a unit and integration viewpoint.
3.2.3 Points of failure
A typical .NET deployment consists of many servers and services. Each layer or tier is capable of providing monitoring hooks through the CLR. This allows experienced consultants to identify the bottlenecks and performance behavior of components in the .NET system.
Testing .NET requires a deep understanding of the end-to-end architecture. Here are some of the common test activities:
  • Unit Testing: The CLR (Common Language Runtime) in .NET allows programs written in any language to co-exist. .NET exposes interfaces between components in a standard way. It is possible to generate test utilities that 'discover' the properties (parameters) and methods (function calls) supported by a component. The utilities can then generate per-unit test harnesses automatically. This speeds up unit testing considerably.
  • Integration Testing: How do we test the system end to end? Given the increasingly dissimilar technologies involved, it is likely that system components will be developed separately. How do we simulate some components (e.g. stub our Web Services) before they exist in reality? As with unit testing we can develop integrated test harnesses. By adding business logic to these test components, units can be linked together and even tested ahead of development (this is known as extreme coding).
  • Functional Testing: The functionality of new-style Web Services can be tested using a black box approach. This means sending the relevant HTTP or XML/SOAP requests and checking that we get the right response back from the Web Service (see the sketch after this list). We also need to test the format of that response (e.g. is our response returned as XML data?). Remember that a Web Service can be written on a variety of platforms, not just Microsoft, because it uses common standards like HTTP/HTTPS, XML and SOAP to communicate. The test environment may need to cross system (internet/extranet) borders. Have we tested the corresponding publish & discovery interfaces of the Web Service? Have we tested that Web Service requests result in the relevant changes to the database?
  • Stress Testing: One of the key factors for success is understanding who the clients are, both technically and from a business perspective. We need to look to automated tools to assist, because the stress test cannot be conducted manually. The automated environment needs to recreate the client communication (via HTTP, SOAP or LAN) so that we stress the correct execution paths through the system. The simulated clients must also provide the right business scenarios so that we create the mix (enquiries vs. updates) and the load (peak times vs. quiet times) required.
  • Compatibility Testing: How do we test the delivery of information to a number of dissimilar devices? Have we used the new-style mobile controls supported by .NET appropriately? Can we check that the underlying core data returned by the application was correct (XML) even when the transformation into the target device's presentation language (e.g. HTML or WML) was incorrect?
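
As a small illustration of the black-box approach described under Functional Testing above, the following sketch (in VB.NET) calls a hypothetical .asmx Web Service method over plain HTTP and checks that the response really is XML. The URL, method name and parameter are invented, and the method is assumed to allow HTTP GET.

Imports System.IO
Imports System.Net
Imports System.Xml

Module WebServiceSmokeTest
    Sub Main()
        'hypothetical endpoint - substitute the real service URL, method and parameter
        Dim url As String = "http://localhost/orders/OrderService.asmx/GetOrderStatus?orderId=1001"
        Dim request As HttpWebRequest = CType(WebRequest.Create(url), HttpWebRequest)
        Dim response As HttpWebResponse = CType(request.GetResponse(), HttpWebResponse)
        Dim reader As New StreamReader(response.GetResponseStream())
        Dim body As String = reader.ReadToEnd()
        reader.Close()
        response.Close()
        'check that the payload is well-formed XML and report the root element
        Dim doc As New XmlDocument()
        doc.LoadXml(body)
        Console.WriteLine("Root element returned: " & doc.DocumentElement.Name)
    End Sub
End Module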
4. Approach
4.1 End To End View
Any web-based application, whether or not it uses .NET technology, is rather like an iceberg. Only a small proportion of the supporting infrastructure is actually visible to an end user. The great innovation behind the web was that it turned our processes 'inside-out' to face customers or partners. The potential downside is that the application's reputation is only as good as the weakest link in the chain.
4.1.1 Sample .NET application - account maintenance
The web application itself may be doing nothing more than providing account snapshots. The web page server/content resides at the ISP. We will assume the customer has dedicated 2 x 10Mb links from their ISP to connect to their own .NET application server. It utilizes Web Services to provide a variety of account lookup functions (by account id, or by customer surname, postcode and house number). The client is experiencing a performance problem when the number of users on the web site exceeds 200, but they don't know where the cause of the problem lies.
4.1.2 Sample .NET application - the case for end-to-end testing
For this scenario, end-to-end testing is necessary to flush out potential hot spots in the overall infrastructure. In our sample environment, it is all too easy for the various vendors (the ISP, the content management application, the database vendor) to say 'it's not me' and even to show some component stats that prove the database can 'handle 200 users'. Our suggested approach in this case might be to look first at the communications bandwidth and the interfaces between layers. By performing a series of appropriate performance tests and monitoring the behavior of the system from an end-to-end perspective, it is possible to see the big picture.
4.1.3 Sample .NET application - monitoring & statistics
The performance test may replicate real users (web-interface) at the front-end, and introduce a mix of HTTP, SOAP/XML transactions. By collecting relevant statistics from network hardware, database servers, and CPU/Memory/Swap information from the .NET CLR, we should begin to see which units are working to or beyond their capacity.
4.1.4 Sample .NET application - conclusion
In this scenario, we have highlighted just some of the potential issues that may occur in an integrated environment, and an approach to solving them. To take a final example, the SOAP protocol used to drive our account lookups is a simple, yet slightly verbose, interface to decode. If the connections between our web server and the application server are reusable, we get better performance from our network bandwidth. However, if the connections are not reusable, the 'system' automatically adds additional security/authorization information. This means that overall performance, for the same transaction load, is worse because of this side effect of the security model: the system needs to send extra data over the network to accomplish the same work. What this shows is that there is a need to have a complete view of the environment. Making sense of your tests and their results requires an understanding of the overall infrastructure and the complex inter-dependencies between systems.
4.2 Is testing easier in a .NET environment?
Many elements of testing should be simplified because the underlying architecture behind .NET is much more flexible than the ASP-based model. We have seen the launch of new, and improved, third party tools that support the .NET environment. For example, they will support the new .NET Web Controls. They will also focus on interaction with Web Services and the interrogation of component interfaces. In addition, many of the Visual Studio development tools provide 'out of the box' functionality that improves testing and debugging of .NET applications. The new CLR provides many detailed hooks that aid performance monitoring. IS Integration also expects that there will be a need to adapt existing tools to the .NET framework, by generating test harnesses that 'understand' .NET components. It will also require specialist skills to establish the appropriate HTTP, XML/SOAP interfaces required for a performance or stress test environment. .NET highlights the need for an integrated, end-to-end view of testing.
5. Tools for testing in .NET environment
Some of the tools available to help test .NET applications are:
  • FxCop: FxCop is a code analysis tool that checks your .NET assemblies for conformance to the .NET Framework Design Guidelines.
  • NUnit: a simple testing framework for .NET languages. Implemented in C#, it applies to all .NET languages. It comes with a text mode for inclusion in automated builds, plus a GUI browser. Open source.
  • TraceView: TraceView is a debug utility that captures debug messages from the DBWIN_BUFFER shared memory. It provides features such as tracing selected processes, applying custom filters at runtime, setting the level of tracing, logging messages to log files, and persistent user settings.
  • ACT (Microsoft Application Center Test): Application Center Test is designed to stress test Web servers and analyze performance and scalability problems with Web applications, including Active Server Pages (ASP) and the components they use. Application Center Test simulates a large group of users by opening multiple connections to the server and rapidly sending HTTP requests. It supports several different authentication schemes and the SSL protocol, making it ideal for testing personalized and secure sites. Although long-duration and high-load stress testing is Application Center Test's main purpose, the programmable dynamic tests are also useful for functional testing. Application Center Test is compatible with all Web servers and Web applications that adhere to the HTTP protocol.

Web Application Framework


Abstract
The Web Application Framework aims to provide an easy-to-use tool set for developing scalable Web-based applications that are easy to develop and maintain.
The approach has been to place function before form and to enable developers to build all the required functionality of an application and then customise the front end using simple templates. Applications can be built from small, simple components providing direct access to database functions and/or providing output routines. Simple components can be linked together to produce the complex behavior required of an application.

Introduction

The web application framework has been designed to provide an extensible framework for rapid web application development. The aim has been to provide a means of developing applications by placing function before form, enabling the rapid deployment of workable solutions. The developed application can then be tweaked using an output templating layer to create the desired appearance once the required features have been implemented.
Many of the current systems for developing web based applications (e.g. ASP, JSP and other web scripting languages) are based on a two tier architecture where a back end database is accessed for information by a server processed script which then presents the results to the end user. Two tier architectures tend to mix form and function requiring developers to code the business logic and display logic concurrently. This mix also makes maintenance of the code rather more difficult as changes to form can affect function and vice versa. In an attempt to address this concern three tier architectures (e.g. EJBs) have been developed placing the business logic in an abstract container and then allowing the display logic to be developed separately and call functions of the intermediate tier. The added code complexity of three tier architectures often extends the development time and costs for a web application. The intermediate tier will also require some container application to hold the business logic adding additional cost and complexity to the deployed application. Three tier architectures also tend to require components to be developed in a high level language requiring a further investment in skills over simple scripting languages (which are still required for the display layer).
The web application framework enables developers to build applications out of simple components and then use a simple template language to produce the display. The aim has been to take the best aspects of two and three tier architectures and reduce the complexity of application development as much as possible whilst retaining flexibility. Based on a two tier architecture (web server front end and database at the back end), the business logic and display logic have been clearly separated in the first tier (the web server). Although other applications have been developed to provide this separation in the form of display template routines, these can still only be accessed through the scripting language. The separation is not enforced by the scripting languages, so developers can (and probably do) sidestep it. In the web application framework the separation is performed by the framework and can't be bypassed, providing a strict environment in which to develop applications.

Features

Enhanced Two Tier Architecture. Providing a clear separation between the data layer and the interaction layer without the added complexity of three tier systems.
Connection Pooling. Enabling rapid response to requests by reducing the need to renegotiate connections with the database. In multi site installations this can also prevent high levels of activity on one site from depriving other sites of access.
Multiple database configurations. The framework supports multiple database configurations allowing one consistent application front end to access many different back-end databases.
Simple component architecture. Allows the development and debugging of applications one step at a time and facilitates easy customization of applications to end-user requirements post installation.
Simple development. Development of applications has been simplified by removing the need for any additional Java development. Entire applications can be created through the database configuration and creation of simple SQL templates to interact with the database.
Complex Behavior possible. Simple components can be chained together to produce complex behavior even interacting with different databases at each stage.
Configuration Database. All of the configuration is held in a database providing centralised configuration management. Database replication enables configuration to be maintained centrally and propagated to many sites transparently.
Web based interface. The View Manager Application provides a Web interface which can be used to develop and maintain applications remotely.
Wide range of supported databases. The Web application framework leverages standard JDBC drivers, providing access to a wide range of back-end databases.
Portable Java Servlets. The framework has been built using Java Servlets which can be installed in many off the shelf web servers without the need for complex and expensive application servers.
Pull down Menus. Values in the database can be easily referenced to create pull down menus to ease user interaction.
Simple Templating system.  Someone with basic knowledge of HTML can easily add the required additional tags into their code to create customised output without affecting the underlying business logic.
Abstracted Security. Access control is handled by the framework and therefore does not rely on any access control being present in the web server on which the system is deployed.
View Level Access Control. Users can be granted access only to specific views based on the access groups which they are members of.
Record Level Access. User and group id details can be used to restrict access to specific records within a view providing even greater control on who can access/modify data.
Plug-in style extensibility. Access to alternative data sources can be easily implemented via a well-documented plug-in interface.
Range of available plug-ins. Plug-ins to access JMS Queues, JMS Topics and XML Files are supplied with the application.

Technical

The design has been broken down into three layers: Data Interaction, Component Linking and Display. This strict delineation allows development of each layer to be focused on separately and, if desired, carried out by separate developers.

Data Interaction Layer

The data interaction layer can prepare a number of different view types to process and display. The simplest is the input view, which simply specifies the fields to be used in a form to get information from the user. A rudimentary view type for editing text files on the web server is also available. The most utilised view types are results and update views, which both use SQL templates to either update the database or select records from it.
The data interaction layer utilises a template processor to parse an SQL template. Special tags, similar to HTML markup, are substituted for required values in the SQL, and this is then run on the database server. The templates allow for default values to be placed into the SQL output when no data is input by the user. Example 1 shows an SQL template. Each %prop tag specifies the name of a variable from the form data posted to the web server; the default attribute specifies what value should be substituted in the absence of input. The suffix and prefix attributes specify additional text that should be placed before and after the substituted value. If no value is present and no default is specified then nothing is substituted.

Example 1. Example SQL template.
select * from announcement
 where
 title like <%prop name="title" prefix="'" suffix="%'" default="">
 <%prop name="announcementid" prefix="and announcementid="%>
 order by posted desc

Component Linking

There are three ways in which views can be linked to other views, providing a flexible means to construct an application. Views can be linked by a standard link, an in-line link or a chained link, and there is no real limit to the number or combination of these link types.
Standard Link. Standard links provide a means to submit data collected or displayed on one view to another view. The most common example is to use a standard link to link to a view which updates that record. These links generate submit buttons on the HTML form.
In-line Link. In-line Links work in a similar way to standard links as data collected in one view is submitted to another view but in this case the results are displayed in the current view. A simple example of this would be to display all the groups a user is a member of when you view the users data.
Chained Link. Chained links allow data submitted to one view to be passed on to another view once this view has finished processing. This can be used to update data in more than one database at a time, re-display a record after updating it, or display a new record after inserting one. This last example utilises the ability of update views to interrogate the database and obtain the unique record identifier of a newly inserted record. This feature can also be used to interrogate the database for other values, such as the number of records remaining after a delete.

Display Layer

The display layer uses the template engine to parse simple page templates to enable easy generation of a consistent front end to your application. There are default output methods which present the results of your data interaction as an HTML form, which can then be modified and/or used as input to another view. The behavior of the default display engine can be tweaked, allowing the type of form element to be changed as required. Most form elements can be created in this way, including text boxes, password fields, large text areas, hidden fields and pull-down menus. Any linked views are also displayed by the default routines. The default display routines can optionally be replaced with even more customised HTML templates, which can also make use of form elements generated by the default display routines, e.g. the pull-down menus and any linked views. An extension to generate XML output has also been included, which will optionally pass the XML output through an XSLT transform to enable even more control over the output formatting.

Example Applications

As a proof of concept, and a test of the framework's capabilities, a number of example applications have been developed. These applications include a simple intranet system comprising an announcements board, a discussion board and a company directory.

View Manager

The View Manager was developed to provide a Web interface to the Web Application Framework. All components of the framework can be created, modified and deleted via a simple-to-use front end, including the editing of SQL and HTML templates.

Intranet System

The intranet system was developed to provide some commonly required applications that could be used to kick start the development of an integrated company intranet system. Additional applications can be added into the system to provide a unified working environment.

Project Manager

The project manager was developed as a simple project management tool, allowing projects to be created, tasks to be created and resources to be assigned to tasks. Expenses can also be filed against projects, and budgets can be calculated based on resource costs and expenses incurred by a project.
The project manager has also been linked to the discussion board component of the intranet system, enabling access to discussions related to specific projects.

Deployment

There are various possible ways of deploying the framework depending on your requirements.

Single Site Install


Figure 1. Single Site Install
Design of a Single Site Deployment
A simple single-site install is the most basic deployment of the Web Application Framework (Figure 1). All user requests are handled by the web server, which verifies the user and looks up the view configuration from the database server. The view configuration is used to establish which SQL templates to use. The web server then parses the SQL template and submits it to the database. The database processes the request and sends a response back to the web server. The web server then processes this response, performing any link operations and obtaining further responses from the database if required. Once a complete set of data is collected, the data is fed back to the user.
Variations on this deployment could involve the users being based at many sites and accessing the web server using secure HTTPS communication over the internet. Client certificates could also be used, if the web server supports them, to prevent any unauthorised access. In order to simplify the deployment, the database and web server may also reside on the same physical machine, although there are some security implications to this if the web server is to be visible on the internet. There may also be more than one database server present; the web server determines which database connection to use when it loads the configuration for the view. The databases may also be of different types, e.g. Oracle, SQL Server or PostgreSQL.

Multi-Site Install


Figure 2. Multi-Site Install
Design of a Multi-Site Deployment
A multi-site install can be done in a number of ways, one of which is outlined in Figure 2. In this case the databases may be heterogeneous in nature, with each web server able to communicate with the other sites' databases (preferably through a VPN over the internet or using SSH port forwarding). This allows the same consistent interface to be available to all sites, and for all sites to be able to access each other's data when required. As its own database is local, a site would not be reliant on a high-bandwidth connection to the internet for the majority of its functions.
If all sites need access to the same data, which may not need updating often, then one database could be configured as the master database and the others as slaves. The framework could then be set up to read data from the local slave database and only write to the master database. The master would then replicate any updates down to the slaves. This would provide the same functionality as a single-site install that could be accessed from many sites using HTTPS, but local data reads should be significantly faster and the load on the master database lower.

Future Plans

The next phase of development will be to provide an additional request handling component that could process XML/SOAP requests and forward them on to the underlying view manager component to produce an XML request. Compatibility with as many of these systems as possible will be maintained.

Glossary

API
Application Programmer Interface. A structured way of forcing programmers to write code that can communicate with some other code.
ASP
Active Server Pages. A web scripting system produced by Microsoft.
Client Certificates
Client Certificates. A mechanism for a user to be validated by a certificate that has been signed by a certificate authority. If a user does not have a valid certificate signed by the correct authority then access can be denied.
E-Hub
E-Hub. A large scale application server designed to handle transactions and requests for data over a large number of different resources.
EJB
Enterprise Java Beans. A set of programming APIs, developed by Sun Microsystems, that enable the creation of Java components that can be distributed across a network. See Also API.
HTML
Hyper-Text Markup Language. The most common way of presenting information over the Internet, using simple tags to convey formatting information.
HTTPS
Hyper-Text Transfer Protocol (Secure). A method of transferring data between a web server and web browser that utilises public private key encryption. See Also Public Private Key Encryption.
JDBC
Java DataBase Connectivity. An API for communicating with databases. There are a large number of database servers that have drivers which support this API. See Also API.
JSP
Java Server Pages. A web scripting system produced by Sun Microsystems.
Public Private Key Encryption
Public Private Key Encryption. Public private key encryption is a method of encrypting information in such a way that only the intended recipient can read the data. A public key and a private key are generated. Any data encrypted with the public key can only be decrypted with the private key, and vice versa. If A wants to communicate with B, then A would encrypt the data with B's public key (which anyone can have) and also sign it with A's private key. When B receives this data, he can use A's public key to verify that the data has come from A, and then use B's private key to actually read the data. As no one else should have B's private key, the data cannot be read by anyone who intercepts it.
SQL
Structured Query Language. The standard query language used in the majority of database systems.
SSH
Secure SHell. Secure shell is a means of encrypting specific traffic between sites using public private key encryption. It is normally used to provide an encrypted remote login to machines with sensitive data on. The port forwarding facility allows for any specific traffic intended for a machine (e.g. just database access) to be encrypted between the two sites. See Also Public Private Key Encryption.
VPN
Virtual Private Network. A system usually configured using specialised firewall routers and public private key encryption to enable sites to be linked securely over the internet. See Also Public Private Key Encryption.
XML
eXtensible Markup Language. A mechanism for marking up data usually indicating the meaning of the data.

Reading and Writing XML in .NET 2.0




"Using XmlReaderSettings, XmlReader, and the Static Create Methods"


It must be tough for companies that develop software for working with XML. No sooner do they get a product out of the door than the World Wide Web Consortium (W3C) changes the recommendations and standards, and their product is out of date. Yet the manufacturers still have to maintain backward compatibility with their previous releases, while attempting to encompass all the new standards. We've seen this several times before in Microsoft's XML product space, and the process shows little sign of stabilizing yet.

OK, so the base specification for XML itself, version 1.0, is complete, stable and implemented in almost all products now. But recent advances in technologies such as XML Query Language (XQuery - see http://www.w3.org/XML/Query) and the XML Information Set (XML InfoSet - see http://www.w3.org/TR/xml-infoset/) require changes to core classes in the System.Xml namespace with each release of the Framework, to keep up with evolving standards.

When version 1.0 of the .NET Framework was introduced, it brought with it a whole raft of new techniques for working with XML. This included a new pull-model parser, the XmlReader, new XML document objects such as XmlDocument, XmlDataDocument and XPathDocument, new classes for working with schemas, and a brand new XSL-T processor. Now, at the time of writing, version 2.0 has just appeared (this article is based on the Beta 2 release). And after the preamble above, you won't be surprised to learn that there are a great many changes in the release compared to version 1.x.

In this series of three articles, we'll look in detail at how the new features of the XmlReader and XmlWriter classes in version 2.0 of the .NET Framework can be used to read and write XML documents, and interact with the new XML document store objects. This includes:

  • The new "settings" classes and static Create methods for XmlReader and XmlWriter
  • Creating and using an XmlReader to read and validate XML documents and fragments
  • Two of the useful new features of the XmlReader class
  • Creating and using an XmlWriter to write XML documents and fragments
  • Some useful new features of the XmlWriter class
  • How the XmlReader and XmlWriter can be used with the XmlDocument class
  • Some of the useful new features of the XmlDocument class

Along the way, we'll look into the issues involved in using the new classes, the reasoning behind the changes, and how the new features simplify your code and provide better overall efficiency for your applications. This first article concentrates on the XmlReader class, and how the new XmlReaderSettings class makes it easy to create XmlReader instances with specific properties such as validation and access control for use in your applications.

The New "Settings" Classes for XmlReader and XmlWriter

To read or write XML in version 1.x, you can create an instance of a class that inherits from XmlReader or XmlWriter, such as XmlTextReader or XmlTextWriter, and then set various properties before using that reader or writer. The XmlReader and XmlWriter classes are abstract, and so you cannot create instances of them directly. And each time you need a reader or writer, you have to go through the same process of creating an instance and setting the properties.

In version 2.0, the fundamental technique for creating readers and writers has changed. There are two new classes named XmlReaderSettings and XmlWriterSettings that you use as a "factory" to generate instances of readers and writers on demand, without having to repeatedly set their properties. This has several benefits in that it:

  • Reduces the code you have to write
  • Allows the framework to make optimizations in the reader or writer based on the settings, for example omitting validation support if this is not required
  • Provides classes that can execute more efficiently in circumstances where the extra features are not required
  • Allows you to create instances of the abstract base classes, rather than having to instantiate classes that inherit from XmlReader or XmlWriter
  • Allows the XmlReader and XmlWriter to be extended in future releases without breaking your code, and therefore removes the need for multiple concrete implementations aimed at different scenarios

The version 2.0 XmlReader and XmlWriter classes expose a new Static/Shared method called Create, which allows you to create instances by specifying an XmlReaderSettings or XmlWriterSettings class instance that defines the behaviour you want. We'll look at how this works with the XmlReader in this article, and the XmlWriter in the next article.

However, first, it's useful to see how the XmlReader and XmlWriter fit into the whole scheme of things in .NET version 2.0. Figure 1 shows the main data flows that involve the three types of XML document store and manipulation classes in System.Xml 2.0 and its subsidiary namespaces. You can see that the XmlReader and XmlWriter are a fundamental part of the flow when reading XML into, and saving it from, other classes such as the document stores.

Figure 1 - How the XmlReader and XmlWriter can be used with the XML Document Stores in v2.0

Not shown here are other areas where the XmlReader and XmlWriter are used, for example when reading XML using the SQLXML technology in SQL Server via an ADO.NET Command instance, or reading and writing XML with the new XslCompiledTransform class that performs XSL-T transformations. And, of course, you can use the methods of the XmlReader and XmlWriter classes directly to read and expose nodes from an XML document, or to create new XML documents.
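
As a quick illustration of that direct, forward-only style of reading (a minimal sketch that jumps ahead to the Create method discussed below; the file path is just an example), you might walk the nodes of a document like this:

Dim xr As XmlReader = XmlReader.Create("C:\temp\myfile.xml")
While xr.Read()
    Select Case xr.NodeType
        Case XmlNodeType.Element
            Console.WriteLine("Element: " & xr.Name)
        Case XmlNodeType.Text
            Console.WriteLine("  Value: " & xr.Value)
    End Select
End While
xr.Close()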

The XmlReaderSettings Class

The XmlReaderSettings class is used to specify the behavior you want for the XmlReader instances that you will create and use in your code. Figure 2 shows a schematic overview of the XmlReaderSettings class. You can see that the set of properties available is broadly similar to the one you will be used to in the version 1.x XmlReader class. You can specify a range of properties that control the way XML is handled, including ignoring white-space and processing instructions, specifying the schema validation type and conformance level, preventing DTDs from being processed, and closing the underlying input stream automatically when the reader is closed.

Figure 2 - The XmlReaderSettings Class

There are also properties that return the current line number and character offset when reading a document, and the ability to switch on and off strict checking of the characters in the input stream (for example characters that are outside the legal range for XML documents). The XmlReaderSettings class also exposes a reference to an XmlResolver that is used to safely read external schemas, DTDs and entities; plus a reference to an ICredentials collection that contains the network credentials to be presented to the server when accessing a remote document.


To resolve namespaces within the XML document, the XmlReaderSettings class also exposes a reference to an XmlNameTable. This is basically a collection of name/value pairs that specify the namespace prefixes and the corresponding namespace identifier declarations.

You can also read an XML stream that doesn't contain the <?xml version="1.0"?> declaration, and read fragments of XML that are not - on their own - valid documents. You specify the conformance level, so that the reader will accept input that is not actually a complete XML document, for example a fragment that contains un-declared namespace prefixes.
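
For example, a minimal sketch of configuring a reader for such a fragment might look like this (the file name is illustrative only):

Dim rs As New XmlReaderSettings()
'accept input that is not - on its own - a complete XML document
rs.ConformanceLevel = ConformanceLevel.Fragment
Dim xr As XmlReader = XmlReader.Create("C:\temp\myfragment.xml", rs)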


Some of the ways that you can use the XmlReaderSettings class are discussed next. We'll look at:

  • Creating an XmlReader with the XmlReaderSettings class
  • Validating XML with the XmlReaderSettings and XmlReader classes
  • Handling XML validation errors
  • Using a custom handler to trap XML validation errors and warnings
  • Reading fragments of XML with an XmlReader
  • Validating fragments of XML with an XmlReader
  • Using an XmlResolver to limit access to resources
  • Wrapping or "pipelining" XmlReader instances

The example page shown in Figure 3 demonstrates most of the features listed above. You can run or download all of the samples at our Website at http://www.daveandal.net/articles/readwritexml/. This first example, named readersettings.aspx, allows you to turn on and off validation (including using a custom validation handler and trapping validation warnings), set the conformance level for a document or a fragment, and use an XmlResolver to limit access to the XML disk file. It also demonstrates reading typed values, as you'll see later in the article. There is a [view source] link at the bottom of the page that you can use to see the source code, which is fully commented to help you understand how it all works.

Figure 3 - The Example Page that Demonstrates Using the XmlReaderSettings Class

Creating an XmlReader with the XmlReaderSettings Class

To create an XmlReader instance, you first instantiate an instance of the XmlReaderSettings class, set the properties you want, and then call the Create method of the XmlReader class. For example, this code creates an XmlReader that closes the underlying input stream when the reader is closed, ignores comments in the XML document, and reads the XML disk file named myfile.xml:

Dim rs As New XmlReaderSettings()
rs.CloseInput = True
rs.IgnoreComments = True
Dim xr As XmlReader = XmlReader.Create("C:\temp\myfile.xml", rs)

Other overloads of the Create method allow you to generate an XmlReader over a Stream, or wrap an existing TextReader or XmlReader which is then used as the input to the new XmlReader. You can also pass an XmlParserContext instance as the third parameter of the Create method, which allows you to declare the namespaces and prefixes used in the document, and specify the language and the white-space handling options that the reader will use when reading the XML. Finally, you can use the Create method without specifying an XmlReaderSettings instance if you just want to create a single instance of an XmlReader, and set the various properties of the reader directly afterwards.
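
As a brief sketch of two of those overloads (the file path and stream are illustrative only):

Dim rs As New XmlReaderSettings()
'create a reader over an existing stream
Dim fs As New System.IO.FileStream("C:\temp\myfile.xml", System.IO.FileMode.Open)
Dim xrFromStream As XmlReader = XmlReader.Create(fs, rs)
'or wrap an existing reader, which then acts as the input to the new one
Dim inner As XmlReader = XmlReader.Create("C:\temp\myfile.xml")
Dim xrWrapped As XmlReader = XmlReader.Create(inner, rs)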

The example page shown in Figure 3 provides a drop-down list where you can select from a range of XML disk files. It also declares a variable to hold an XmlParserContext instance, which is populated if you select the option to read an XML fragment instead of a complete and well-formed XML document. The XmlReader is then created using the static Create method against the XML file you select in the drop-down list:

Dim xpc As XmlParserContext = Nothing
...
' create and populate the XmlParserContext here if reading an XML fragment
...
Dim xr As XmlReader = Nothing
Dim sPath As String = Server.MapPath("data/" & lstDocument.SelectedItem.Text)
xr = XmlReader.Create(sPath, rs, xpc)

If there is an error creating the XmlReader, for example a security exception or if the XML file or stream you specify does not exist, the exception is raised when you call the Create method. Therefore you should always use a Try..Catch construct to trap any such errors.
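
A minimal sketch of that pattern, assuming the rs, xpc and sPath variables from the snippets above:

Dim xr As XmlReader = Nothing
Try
  xr = XmlReader.Create(sPath, rs, xpc)
Catch ex As Exception
  ' the file may be missing, the stream unreadable, or access blocked by security policy
  ' ... display the error details here and exit the routine ...
  Return
End Try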

Validating XML with the XmlReaderSettings and XmlReader Classes

One of the stranger features in version 1.x of the System.Xml implementation is that you have to use a special class, XmlValidatingReader, to validate an XML document. And you have to create this XmlValidatingReader from an existing XmlReader instance. This is because validation adds an overhead to the reader class that wastes resources if validation is not required (although the readers do check that the document is well-formed).

In version 2.0, you can validate a document directly when using an XmlReader. A range of properties on the XmlReaderSettings class allow you to specify one or more external XML schemas or DTDs using the XmlSchemaSet class (a collection of XmlSchema instances), and these are applied to the XML as it is read - depending on the settings you specify for the ValidationType and ValidationFlags properties. The ValidationFlags property is a combination of flag values from the XmlSchemaValidationFlags enumeration, as shown earlier in Figure 2, and the flags can be combined as the short sketch after the list shows. This enumeration contains five values:

  • None: none of the validation flags are active - this is the default
  • ProcessIdentityConstraints: all constraints specified by xs:ID, xs:IDREF, xs:key, xs:keyref, xs:unique elements in the document are processed
  • ProcessInlineSchema: any inline schema within the document is processed
  • ProcessSchemaLocation: any elements that specify external schema locations, such as xsi:schemaLocation, xsi:noNamespaceSchemaLocation, are processed
  • ReportValidationWarnings: any warnings encountered during validation are detected, and the corresponding validation events will be raised.
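
Because these are bit flags, several of them can be turned on at once. A short sketch (not taken from the example page), using a settings instance named rs:

' combine several validation flags using Or
rs.ValidationFlags = rs.ValidationFlags _
                  Or XmlSchemaValidationFlags.ProcessInlineSchema _
                  Or XmlSchemaValidationFlags.ProcessSchemaLocation _
                  Or XmlSchemaValidationFlags.ReportValidationWarnings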

To enable validation in an XmlReaderSettings class, before you create the XmlReader instances you need from it, you must perform two tasks. The first is to create an XmlSchemaSet and assign to it the schemas that will be used for validating the XML (unless the XML document contains an inline schema). In the example page, we use an XML document that references two schemas - one that defines the main elements in the document, and one that defines the reviewed element with the namespace prefix "rv". This is the complete, valid XML document:

<?xml version="1.0" encoding="utf-8"?>
<root xmlns="http://myns/slidesdemo" xmlns:rv="http://myns/slidesdemo/reviewdate">
<session name="All about XML">
  <slides>
    <slide position="1">
      <title>Agenda</title>
      <rv:reviewed>2004-05-10T00:00:00</rv:reviewed>
    </slide>
    <slide position="2">
      <title>Introduction</title>
      <rv:reviewed>2003-10-22T00:00:00</rv:reviewed>
    </slide>
    <slide position="3">
      <title>Code Examples</title>
      <rv:reviewed>2004-03-02T00:00:00</rv:reviewed>
    </slide>
  </slides>
</session>
</root>

You can see the two namespace declarations in the root element, and these are used in the targetNamespace attribute of the two schemas. So we need to add both of these schemas to the XmlSchemaSet, and then assign the XmlSchemaSet to the Schemas property of the XmlReaderSettings instance:

Dim ss As New XmlSchemaSet()
ss.Add("http://myns/slidesdemo", Server.MapPath("data/schema/slides.xsd"))
ss.Add("http://myns/slidesdemo/reviewdate", Server.MapPath("data/schema/slidesrev.xsd"))
rs.Schemas = ss

Then we turn on validation by setting the ValidationType and specifying the ValidationFlags we want to be active. In this case, we've specified that validation should be carried out against an XML schema, though you could use ValidationType.Auto, in which case the reader will detect which type of schema or DTD is being used:

rs.ValidationType = ValidationType.Schema
rs.ValidationFlags = (rs.ValidationFlags Or XmlSchemaValidationFlags.ProcessSchemaLocation)

Handling XML Validation Errors and Warnings

Now any validation error will raise an XmlSchemaException when the XML is read. So you can handle this error to find out what happened, either when loading another object with the XmlReader (for example passing it to the Load method of an XmlDocument instance), or when reading individual nodes directly. In the example page, we've previously created a StringBuilder to hold the results of processing the XML disk file, and it can be populated with the validation error details like this:

Try
  While xr.Read()
    ' ... handle and display XML document content here ...
  End While
Catch xsx As XmlSchemaException
  ' document failed validation against schema so display details
  builder.Append("<p><b>ERROR validating XML document against schema:</b><br />")
  builder.Append("Message = " & xsx.Message & "<br />")
  builder.Append("LineNumber = " & xsx.LineNumber.ToString())
  builder.Append(" &nbsp; LinePosition = " & xsx.LinePosition.ToString() & "</p>")
  ...

Figure 4 shows the result of validating an XML document that contains invalid content. This document contains the element <slide position="two">, which is invalid because the data type defined in the schema for this element is xs:unsignedByte. Notice that processing of the XML document stops when the error is encountered (if you do not tick the first checkbox in the page, it will read the XML without validating it and you'll be able to see the values of all the nodes).

Figure 4 - Validating a Document with an XmlReaderSettings and XmlReader Class

However, the XmlReader may also raise other types of exception when reading the XML document, for example if the file becomes unavailable or the input stream is disrupted. In this case, you should also include a generic error handler section, and remember to close the XmlReader as well when you have finished using it:

  ...
Catch ex As Exception
  ' error reading document so display details
  builder.Append("<p><b>ERROR reading XML document:</b><br />")
  builder.Append("Message = " & ex.Message & "</p>")
Finally
  Try
    xr.Close()
  Catch
  End Try
End Try

Another approach is to use a Using construct, now available in VB.NET as well as C#, to ensure that the reader is correctly disposed when you have finished with it. You don’t have to remember to call Close in this case, though it's still good practice to do so. For example:

Using xr As XmlReader = XmlReader.Create("test.xml", rs)
  ' ... use the XmlReader here ...
  ' ... still good practice to call Close when complete ...    
End Using

Using a Custom Handler to Trap XML Validation Errors and Warnings

Trapping validation errors, as shown above, is useful, but sometimes you want to handle validation errors yourself, without having processing stop when the first one is encountered. As in version 1.x, you can add a custom handler to the ValidationEventHandler event (in version 2.0, this event is exposed by the XmlReaderSettings class rather than by the reader itself), which is called whenever a validation error is raised. In VB.NET, you can use the following to specify the event handler named MyValidationHandler for this event:

AddHandler rs.ValidationEventHandler, AddressOf MyValidationHandler

In C#, you would use:

rs.ValidationEventHandler += MyValidationHandler;

A simple event handler is used in the example page, which adds details of the validation error to the StringBuilder so that they can be displayed in the page afterwards. And, because we are handling the validation event ourselves, processing of the XML document continues when each error is detected:

Sub MyValidationHandler(ByVal sender As Object, ByVal e As ValidationEventArgs)
  ' display error details
  builder.Append("<p><b>ValidationEventHandler detected an error:</b><br />")
  builder.Append("Message = " & e.Message & "<br />")
  builder.Append("Severity = " & e.Severity.ToString() & " &nbsp; ")
  ' get line number and character offset from exception
  builder.Append("LineNumber = " & e.Exception.LineNumber.ToString() & " &nbsp; ")
  builder.Append("LinePosition = " & e.Exception.LinePosition.ToString() & "</p>")
End Sub


By default, only validation errors are reported when you validate an XML document. However, validation can also raise warnings that indicate a problem with the XML, but do not necessarily mean it is invalid. A prime example is when you are reading a fragment of XML that does not contain the matching namespace declaration. To see these warnings, you must handle the validation event yourself, as demonstrated in the previous section, and also turn on validation warnings by setting the ReportValidationWarnings flag in the ValidationFlags property of the XmlReaderSettings instance before you create the XmlReader:

rs.ValidationFlags = (rs.ValidationFlags _
                   Or XmlSchemaValidationFlags.ReportValidationWarnings)

Now the custom event handler can report the validation warnings as well as validation errors. When a warning is encountered, the value of the Severity property of the ValidationEventArgs instance passed to the event handler will be "Warning".
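
For example, the handler can branch on the severity - a sketch along the lines of the handler shown above, using the same builder:

' distinguish warnings from errors inside the custom handler
If e.Severity = XmlSeverityType.Warning Then
  builder.Append("<p><b>Validation warning:</b> " & e.Message & "</p>")
Else
  builder.Append("<p><b>Validation error:</b> " & e.Message & "</p>")
End If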


This article is the first in a three-part series:
  • Part 1 - Using XmlReaderSettings, XmlReader, and the Static Create Methods
  • Part 2 - Using XmlWriterSettings, XmlWriter, and the Static Create Methods
  • Part 3 - Loading and Persisting XML with an XML Document Store Object


Reading Fragments of XML with an XmlReader

The XmlReader, by default, expects all XML documents to be well-formed. However, there are occasions when you want to read fragments of XML that may not be strictly well-formed, and also be able to validate these where possible. To read fragments of XML, you set the ConformanceLevel property of the XmlReaderSettings instance to ConformanceLevel.Fragment before you create the XmlReader(s):

rs.ConformanceLevel = ConformanceLevel.Fragment

However, XML fragments do not usually contain enough information for the XmlReader to be able to read the document. They may not contain the required namespace declarations, or the <?xml...?> declaration that defines the language, encoding and white-space treatment required for the document. In other words, the context for reading the document may well be missing.

To get round this, you will usually have to provide the missing information by creating and populating an appropriate XmlParserContext instance. This process starts by assigning a new NameTable to the XmlReaderSettings instance and creating a new XmlNamespaceManager over it, which will hold the namespace declarations. You then add the required namespaces to the XmlNamespaceManager:

rs.NameTable = New NameTable()
Dim nsm As New XmlNamespaceManager(rs.NameTable)
nsm.AddNamespace("rv", "http://myns/slidesdemo/reviewdate")

Then you can create the new XmlParserContext using the XmlNamespaceManager, and optionally include the language and white-space handling values you want. And, to specify the encoding of the document, you just set the Encoding property of the XmlParserContext instance to an appropriate encoding class instance:

Dim xpc As XmlParserContext = New XmlParserContext(rs.NameTable, _
                                  nsm, "en", XmlSpace.Default)
xpc.Encoding = New UTF8Encoding()

Then you can create the XmlReader from the XmlReaderSettings instance using the overload of the static Create method that accepts an XmlParserContext instance:

Dim xr As XmlReader = XmlReader.Create("C:\temp\myfile.xml", rs, xpc)

Now you can read XML fragments that match the settings in the XmlParserContext. The example page we've been using so far allows you to specify the following XML fragment as the source, and turn on fragment conformance, using code like that shown above. Notice that - with the exception of the reviewed element - the fragment does not contain any namespace declarations or prefixes. The namespace prefix on the reviewed element is acceptable because we create the NameTable containing this namespace declaration as part of the XmlParserContext we use to read this fragment:

<slides>
  <slide position="1">
    <title>Agenda</title>
    <rv:reviewed>2004-05-10T00:00:00</rv:reviewed>
  </slide>
  <slide position="2">
    <title>Introduction</title>
    <rv:reviewed>2003-10-22T00:00:00</rv:reviewed>
  </slide>
</slides>

Figure 5 shows the result, and you can see the contents of the XML fragment listed above. If you turn off fragment checking and try to read this fragment (in which case the appropriate XmlParserContext is not created), you'll see that an error is raised because the "rv" prefix is not declared.

Figure 5 - Reading an XML Fragment with an XmlReaderSettings and XmlReader Class

Validating Fragments of XML with an XmlReader

Validation is also supported for XML fragments, as you can see if you turn on validation in the example page. You can select an invalid fragment and try reading this to see the effects. The invalid fragment contains the element <rv:reviewed>yes</rv:reviewed>, which is illegal because the schema for this section of XML (slidesrev.xsd in the data\schema subfolder) defines this element as an xs:dateTime type. Figure 6 shows the results.

Figure 6 - Validating an XML Fragment with an XmlReaderSettings and XmlReader Class

However, when you read fragments of XML, you often find that validation warnings are encountered. In our example, when a custom error handler is used, we specify that warnings should be raised by setting the ReportValidationWarnings flag in the ValidationFlags property of the XmlReaderSettings instance. If you set the checkboxes in the example page for validation, custom validation error handling and warnings reporting, as well as the fragment conformance option, you'll see these warnings appear when you attempt to read the slides-invalid-fragment.xml file - as shown in Figure 7.

Figure 7 - Displaying Validation Warnings and Errors for an XML Fragment

Using an XmlResolver to Limit Access to Resources

The final feature that the example we've been using so far demonstrates is how you can control access to resources when using an XmlReader. This could be useful if, for example, you want to limit access to a particular folder or set of XML disk files. By default, the XmlReader uses an XmlResolver that is created internally to resolve references, URLs and paths to the resources it uses. However, you can create your own XmlResolver instance and use this to set the XmlResolver property of the XmlReaderSettings instance before you create your XmlReader(s).

The first step is to create a PermissionSet that defines the permissions you will demand when the XmlReader tries to access a resource. By specifying PermissionState.None in the constructor, you create an empty permission set - so, unless you add permissions to it, any access will fail. Note that you must import the System.Security and System.Security.Permissions namespaces when writing code to control access to resources like this:

Dim ps As New PermissionSet(PermissionState.None)

Now you can create individual permissions, and add them to the PermissionSet. In the example page, we want to be able to access the folder named data that contains the XML disk files, and so we create a FileIOPermission instance that gives read access to this folder:

Dim fpdata As New FileIOPermission(FileIOPermissionAccess.Read, Server.MapPath("./data/"))
ps.AddPermission(fpdata)

Then we can create a new XmlSecureResolver (a class that inherits from XmlResolver) and specify this permission set, then use it to set the XmlResolver property of the XmlReaderSettings instance we're using:

rs.XmlResolver = New XmlSecureResolver(New XmlUrlResolver, ps)

If you run the example page, and set the checkbox to block access to all folders, you'll find that an error is displayed - as shown in Figure 8. This is because the code in the example page does not add the FileIOPermission to the PermissionSet unless you also set the "Allow access..." checkbox.

Figure 8 - Preventing Access to Resources with an XmlSecureResolver

This error is trapped by the Try..Catch construct around the call to the Create method of the XmlReader class. We specifically catch instances of a SecurityException, display the message, and then exit from the routine. The SecurityException class exposes a range of properties that describe the exception, but we're only using the Message property in our example page:

Try
  ' ... create the XmlReader using the XmlReaderSettings ...
Catch secx As SecurityException
  builder.Append("<p><b>ERROR creating XmlReader:</b><br />")
  builder.Append("Message = " & secx.Message & "</p>")
  Label1.Text &= builder.ToString()
  Return
Catch ex As Exception
  ' ... handle exceptions for other errors here ...
End Try

If you now set the checkbox to allow access to the data folder, the XmlReader is able to read the XML file and display the contents as it does when using its default XmlResolver.
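
As a variation (not part of the example page), you could grant read access to a single XML file rather than the whole folder - the file name here is only a placeholder:

' allow the resolver to read one specific file only - the file name is a placeholder
Dim ps As New PermissionSet(PermissionState.None)
ps.AddPermission(New FileIOPermission(FileIOPermissionAccess.Read, _
                 Server.MapPath("data/slides.xml")))
rs.XmlResolver = New XmlSecureResolver(New XmlUrlResolver(), ps)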

Wrapping or "Pipelining" XmlReader Instances

One of the options when you create an XmlReader or XmlWriter using the static Create methods is to specify as the source (the first parameter of the Create method) another XmlReader or XmlWriter, or an existing TextReader or TextWriter. You can create a new XmlReader instance over an existing XmlReader or TextReader, and create a new XmlWriter instance over an existing XmlWriter or TextWriter.

This process is called wrapping or pipelining, and allows you to add new features to an existing reader or writer as you create a new instance from it. For example, you can add validation support to an XmlReader created over an existing XmlReader that does not validate the incoming XML, or even over a TextReader that is already referencing an XML document. Notice, however, that you cannot remove features that are already enabled on the source reader or writer. This could, if permitted, prevent the source reader or writer from behaving correctly.

We provide an example named pipelinereaders.aspx that demonstrates wrapping an XmlReader with another XmlReader. It starts by creating an XmlReader using an XmlReaderSettings instance in the same way as the previous example, but only sets a few properties of the XmlReaderSettings. The XmlReader is created over the same invalid XML document as you saw in the previous example:

' create an XmlReaderSettings instance and set some properties
Dim rs1 As New XmlReaderSettings()
rs1.CloseInput = True
rs1.IgnoreComments = True
rs1.IgnoreWhitespace = True

' declare a variable to hold an XmlReader
Dim xr As XmlReader = Nothing
Try
  ' create the XmlReader using this first XmlReaderSettings instance
  Dim sPath As String = Server.MapPath("data/slides-invalid-content.xml")
  xr = XmlReader.Create(sPath, rs1)
  builder.Append("Created non-validating XmlReader<br />")
Catch ex As Exception
  ' ... display error details here ...
End Try

Now a new XmlReaderSettings instance is created. By layering over an existing XmlReader, the new XmlReader will assume the settings of the existing XmlReader, which you can add to through the new XmlReaderSettings instance. In this case we'll add validation to the new XmlReader.


The next section of code shows the new XmlReaderSettings instance being created, and the validation features set in the same way as we did in the previous example. This includes adding a custom event handler to the ValidationEventHandler event of the XmlReaderSettings instance:

' create a new XmlReaderSettings instance and set some properties
Dim rs2 As New XmlReaderSettings()
' create and populate an XmlSchemaSet instance
Dim ss As New XmlSchemaSet()
ss.Add("http://myns/slidesdemo", Server.MapPath("data/schema/slides.xsd"))
ss.Add("http://myns/slidesdemo/reviewdate", Server.MapPath("data/schema/slidesrev.xsd"))
' add XmlSchemaSet to XmlReaderSettings and turn on validation
rs2.Schemas = ss
rs2.ValidationType = ValidationType.Schema
rs2.ValidationFlags = (rs2.ValidationFlags Or XmlSchemaValidationFlags.ProcessSchemaLocation)
' add a custom handler for validation events
AddHandler rs2.ValidationEventHandler, AddressOf MyValidationHandler

Now we create a new XmlReader using the new XmlReaderSettings instance, by specifying the original XmlReader as the first parameter of the Create method. Then we call a separate routine named ShowReadToMethods to display some values from the XML document:

' declare a variable to hold the validating XmlReader
Dim vxr As XmlReader = Nothing
Try
  ' create XmlReader using XmlReaderSettings instance and existing non-validating XmlReader
  vxr = XmlReader.Create(xr, rs2)
  ' display a couple of values from the invalid XML document
  ShowReadToMethods(vxr)
Catch ex As Exception
  ' ... display error details here ...
End Try

The ShowReadToMethods routine uses the new ReadToXxx methods of the XmlReader class, so we'll look at this code in the next section when we examine these methods in more detail. In the meantime, Figure 9 shows the result. You can see that the document has been validated as it was being read and displayed, and that processing does not stop when the first validation error is encountered. The output in the page shows each reader being created, the values of some nodes in the document, and the messages generated by the custom validation handler we specified when we created the second XmlReaderSettings instance.

Figure 9 - Wrapping One XmlReader with another XmlReader that Performs Validation

Two Useful New Features of the XmlReader Class

As well as the use of the static Create methods and the "settings" classes we've just described, the XmlReader in version 2.0 of System.Xml provides other new features and opportunities. The two we'll look at here are:

  • Reading up to specific elements or fragments
  • Reading typed values from an XML document

Reading Up To Specific Elements or Fragments

When reading XML documents with an XmlReader where you want to locate a specific element or attribute node, one of the most laborious and inefficient parts of the process is actually reading up to that node. In version 2.0, the XmlReader exposes some new methods that you can use. These are the ReadToDescendant, ReadToFollowing and ReadToNextSibling methods, which allow you to easily skip over nodes and content until you arrive at the element node you require.

The example page named pipelinereaders.aspx we used in the previous section demonstrates some of these methods. After creating the XmlReader that performs validation, the code calls a routine named ShowReadToMethods, passing in the XmlReader. This listing shows the ShowReadToMethods routine in full. You can see from this how easy it is to navigate through a document using these new methods:

Sub ShowReadToMethods(ByVal vxr As XmlReader)

  ' move to the first descendant slide element
  builder.Append("Executing the ReadToDescendant(""slide"") method<br />")
  If vxr.ReadToDescendant("slide") Then
    builder.Append("Found element '" & vxr.Name)
    ' display the value of the position attribute
    vxr.MoveToAttribute("position")
    builder.Append("' with position attribute = '" & vxr.Value & "'<br />")
  Else
    builder.Append("Cannot execute the <b>ReadToDescendant</b> method.<br />")
  End If

  ' move to the next slide element
  builder.Append("Executing the ReadToNextSibling(""slide"") method<br />")
  If vxr.ReadToNextSibling("slide") Then
    builder.Append("Found element '" & vxr.Name)
    ' display the value of the position attribute
    vxr.MoveToAttribute("position")
    builder.Append("' with position attribute = '" & vxr.Value & "'<br />")
  Else
    builder.Append("Cannot execute the <b>ReadToNextSibling</b> method.<br />")
  End If

  ' move back to element so that ReadToDescendant can be called next
  vxr.MoveToElement()

  ' move to the title element
  builder.Append("Executing the ReadToDescendant(""title"") method<br />")
  If vxr.ReadToDescendant("title") Then
    builder.Append("Found element '" & vxr.Name)
    ' display the value of the element
    vxr.Read()
    builder.Append("' with value = '" & vxr.Value & "'<br />")
  Else
    builder.Append("Cannot execute the <b>ReadToDescendant</b> method.<br />")
  End If

  ' move to the third slide element
  builder.Append("Executing the ReadToFollowing(""slide"") method<br />")
  If vxr.ReadToFollowing("slide") Then
    builder.Append("Found element '" & vxr.Name)
    ' display the value of the position attribute
    vxr.MoveToAttribute("position")
    builder.Append("' with position attribute = '" & vxr.Value & "'<br />")
  Else
    builder.Append("Cannot execute the <b>ReadToFollowing</b> method.<br />")
  End If

  ' move back to element so that ReadToDescendant can be called next
  vxr.MoveToElement()

  ' move to the reviewed element
  builder.Append("Executing the ReadToDescendant(""reviewed"", _
                 ""http://myns/slidesdemo/reviewdate"") method<br />")
  ' NOTE: could have used just "rv:reviewed" here instead
  If vxr.ReadToDescendant("reviewed", "http://myns/slidesdemo/reviewdate") Then
    builder.Append("Found element '" & vxr.Name)
    ' display the value of the element
    vxr.Read()
    builder.Append("' with value = '" & vxr.Value & "'<br />")
  Else
    builder.Append("Cannot execute the <b>ReadToDescendant</b> method.<br />")
  End If

End Sub

You can see from this that the ReadToXxx methods return a Boolean value that indicates if they managed to move to the specified nodes in the document. The routine displays a message in the page before each call to the ReadToDescendant, ReadToFollowing and ReadToNextSibling methods, and the name and value of the node it moved to if the method succeeds (for the slide elements that have no value, it displays the value of the position attribute instead). If it cannot perform the move, the routine displays a message to this effect.

If you look back at Figure 9, you'll see the results. The code starts by moving to the first descendant slide element using ReadToDescendant("slide"), and then to the next slide element by calling ReadToNextSibling("slide"). This element has an invalid value for its position attribute, as indicated by the text generated by the custom validation handler included in the page. Next, the code calls the MoveToElement method so that the reader is positioned on the slide element itself, and not on the child text node, before calling ReadToDescendant("title") to move to the title element within this slide element.

At this point, the only way to get back to the previous level in the node hierarchy, so that we can move to the next slide element, is to call ReadToFollowing("slide"). This method moves through the document in the order that the nodes appear in the XML, rather than in a hierarchical manner. Notice that, on the way there, the reader has to read the reviewed child element of the current slide element, which also contains an invalid value - as shown by the second validation message in the page.

After displaying the value of the position attribute of the third slide element, the code calls MoveToElement to get back to the element node, and then ReadToDescendant("reviewed","http://myns/slidesdemo/reviewdate") to get to the reviewed element. The reviewed element is in a separate namespace and has the prefix "rv", and so we specify the namespace URI as well as the local name of the element. Alternatively, as noted in the comments in the code, we could specify the qualified name of the element instead - using the more compact form ReadToDescendant("rv:reviewed").

Reading Typed Values from an XML Document

The XML Infoset model effectively views XML documents as typed data - often as the equivalent of rowsets such as you'd find in an ADO.NET DataTable or DataSet. This is achieved by layering the schema over the XML so that each node (element or attribute) is exposed as an instance of the relevant data type. In the System.Xml classes, this means standard CLR types such as String, Int32, Boolean, DateTime, etc. To allow you to access documents as typed data, the XmlReader exposes a series of new methods named ReadContentAsXxx and ReadElementContentAsXxx, where Xxx is the name of the data type. There is also a generic ReadValueAs method, where you specify the data type of the node that you want to query.


The example page we used at the start of this article reads some values from the XML document as CLR typed instances using the ReadContentAsXxx and ReadElementContentAsXxx methods. It reads the value of the position attribute on each slide element (these are defined in the schema as of type xs:unsignedByte) as an Int32 value using the ReadContentAsInt method of the XmlReader class. It also reads the value of the reviewed element for each slide (these are defined in the schema as of type xs:dateTime) as DateTime instances using the ReadElementContentAsDateTime method.

After creating the XmlReader, the code calls the Read method repeatedly (until it returns False), so that each node is read from the XML document in turn. If the current node is an element, and this is the start tag, the name and the value type name (as returned by the ValueType property) are added to the StringBuilder that will display the results after the complete document has been processed. However, if validation is enabled for the XmlReader (the checkbox named chkValidate will be set in this case in our example), the schema will expose the values as the correct data types and so we can use the appropriate method to extract the value as a CLR data-typed instance. We do this for the reviewed element, using the ReadElementContentAsDateTime method:

While xr.Read()
  If xr.IsStartElement() Then
    builder.Append("Element Name: " & xr.Name)
    builder.Append(" &nbsp; ValueType: " & xr.ValueType.ToString() & "<br />")
    If chkValidate.Checked And xr.LocalName = "reviewed" Then
      Dim dt As DateTime = xr.ReadElementContentAsDateTime()
      builder.Append("Element Typed value: " & dt.ToString() & "<br />")
    End If
    ...

Now the code checks if the current element node has any attributes. If so, it iterates through them in the same way as you would in System.Xml version 1.x when using an XmlTextReader. The name, value type and value of each one can then be displayed. However, when validation is enabled and the current attribute is named position, the code can call the ReadContentAsInt method of the XmlReader to get the value as an Int32 type as well:

    ...
    If xr.HasAttributes Then
      While xr.MoveToNextAttribute()
        builder.Append(" - Attribute Name: " & xr.Name)
        builder.Append(" &nbsp; ValueType: " & xr.ValueType.ToString())
        builder.Append(" &nbsp; Value: '" & xr.Value & "'")
        If chkValidate.Checked And xr.LocalName = "position" Then
          Dim pos As Int32 = xr.ReadContentAsInt()
          builder.Append(" &nbsp; Typed value: " & pos.ToString())
        End If
        builder.Append("<br />")
      End While
    End If
  End If
  ...

Finally, the code checks to see if the current node is the child text node that contains the value of an element (XmlNodeType.Text). Elements in an XML document have their value stored in a child node, and so this must be handled separately when using an XmlReader.  In this case there is no node name (the parent element node contains the name), but the value can be extracted and displayed:

  ...
  If xr.NodeType = XmlNodeType.Text Then
    builder.Append("Element String Value: '" & xr.Value & "'" & "<br />")
  End If
End While

Figure 10 shows the readersettings.aspx example page displaying the XML content when validation is disabled and when it is enabled. You can see the CLR data type names returned by the ValueType property, and the typed value that is obtained by calling the appropriate ReadContentAsXxx method when validation is enabled. The position attribute actually appears as a System.Byte type, but there is no ReadContentAsByte method so the ReadContentAsInt method is used instead to return an Int32 instance. And the reviewed element appears as a DateTime type as expected, but notice that calling any of the ReadElementContentAsXxxx methods consumes (i.e. reads) the element value - the child text node - so it does not appear when the code checks for nodes of type XmlNodeType.Text at the end of the iteration loop.

Figure 10 - Reading Values from an XML Document as CLR Typed Instances
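
A minimal sketch of that consumption behavior, assuming the reader is currently positioned on an rv:reviewed start tag:

' reading the typed content consumes the whole element, so no separate
' child text node is reported for it afterwards
If xr.IsStartElement("reviewed", "http://myns/slidesdemo/reviewdate") Then
  Dim reviewedOn As DateTime = xr.ReadElementContentAsDateTime()
  ' the reader is now positioned on the node that follows </rv:reviewed>
End If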

Summary

In this series of three articles, we explore how the new features of the XmlReader and XmlWriter classes in version 2.0 of the .NET Framework can be used to read and write XML documents, and interact with the new XML document store objects. In this first article, we've concentrated on the XmlReader class, and the new XmlReaderSettings class that makes it easy to generate single or multiple instances of XmlReader with a range of useful properties. We looked at:

  • The new "settings" classes and static Create methods for XmlReader and XmlWriter
  • Creating and using an XmlReader to read XML documents and fragments
  • Two of the useful new features of the XmlReader class

The XmlReaderSettings and XmlWriterSettings classes hold a wide range of settings that you may need to apply when you create an XmlReader or an XmlWriter. In conjunction with the new static Create methods of XmlReader and XmlWriter, they allow you to store these settings for use whenever you need to create a reader or writer, saving time and making the whole process a lot more transparent and efficient.

The XmlReaderSettings class provides features that allow you to specify the general behavior of the XmlReader(s) you create, such as reading or ignoring DTDs, schemas, white-space, comments, etc. It also provides features to add validation for XML documents or fragments of XML, control access to resources, add credentials for accessing remote or secured resources, and more.

The XmlReader class itself also exposes several useful new features. In particular, in this article, we looked at how navigation in a document is improved through the new ReadTo methods, and how you can now access the content of the XML as CLR typed values.

In the next article, we'll move on to look at the XmlWriter class, and the corresponding XmlWriterSettings class, to see how they make it easier to create and use writers in version 2.0 of the .NET Framework.