Channel: Rick Strahl's FoxPro and Web Connection Weblog

Syntax Errors in the FoxPro Editor caused by Extended Characters


I keep running into weird errors when trying to save program files in the FoxPro editor on occasion. For some reason I end up with errors like the following:

What could be wrong with that line of code? Even if the variable name was invalid the compiler should always be able to process a simple expression like this, right?

10 minutes later and after checking the code around it, I finally figure out that I have to...

Watch those Control Characters

If you look closely at the line of code highlighted, you'll notice that there's an extra space at the end of that line. Or rather what appears to be an extra space.

As it turns out that's no space - it's an invisible control character sequence that I accidentally inserted by way of my Visual Studio biased fingers :-) I pressed Ctrl-Shift-B to build my project (a Visual Studio key combo) which in turn embedded a character combination into the editor. That invisible character is interpreted as an extra character on the line of code and so the line actually becomes invalid.

It becomes obvious if you take the text and paste it into another editor like Sublime Text:

The result: The code doesn't compile and you get the above error. Remove the extra character and life is good again.

Moral of the Story

This happens to me on a regular basis. The FoxPro editor is notorious for its crappy handling of control characters - even those it knows about like Ctrl-S. For example, if you hit Ctrl-S multiple times in a row quickly the first time will save, while subsequent Ctrl-S combos will inject characters into the document.

Ctrl-S Failure

This also causes syntax errors which often get left in the document, but at least this one you can see and the compiler can point you at the offending line or code block.

Other key combos though - like my Ctrl-Shift-B compile twitch - are more difficult to catch when the compiler complains, because they are invisible and it looks like there's nothing wrong.

The ultimate moral of the story is: If you see an error that clearly isn't an error make sure it's not just an extra character that snuck into the document.

this post created with Markdown Monster

Web Connection 6.10 Release Notes


West Wind Web Connection 6.10 has been released today and there are a number of enhancements and a few new features in this update release. There are also a couple of breaking changes, so be sure to read those if you are upgrading from previous versions.

I'm going to break down the changes into Web features that are specific to Web Connection and General Purpose features that concern the more general Internet and Client Tools functionality, that will eventually also find their way into West Wind Internet And Client Tools.

This is a fairly long post - if you just want the quick and dirty list go to

Otherwise here's more detail:

Web Features

Let's start with the Web Features that are relevant for building Web applications with Web Connection.

Html Encoded Script Expressions: <%: %>

When building Web content it's important that you Html encode content. Html content encoding essentially makes HTML text safe, and is a huge step towards preventing cross-site scripting attacks. Html encoding basically takes angle brackets (< and >), quotes, ampersands and a few other characters and turns them into HTML-safe entities that are not evaluated. By doing so you prevent script injection: if anybody tries to inject a script tag like <script>alert("Gotcha")</script> into your content, that attempt will be foiled.
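To illustrate what encoding does to markup-significant characters, here's a quick sketch (the exact entity output depends on EncodeHtml()'s implementation):

```foxpro
*** Sketch: Html encoding neutralizes markup characters
? EncodeHtml('<script>alert("Gotcha")</script>')
* angle brackets become &lt; and &gt;, quotes become &quot;,
* so the browser displays the text instead of executing the script
```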

Web Connection has always included an EncodeHtml() function that can be used so you can always do:

<%= EncodeHtml(poOrder.Notes) %>

With Web Connection 6.10 there now is an easier way to do the same thing:

<%: poOrder.Notes %>

So rather than having to explicitly specify the EncodeHtml() function, using <%: expression %> does the same thing. The syntax is compatible with ASP.NET WebForms, which uses that same syntax for encoded content.

It's a nice convenience feature and I recommend you use it on all of your script expression tags, except where you explicitly do not want it!

Updated Visual Studio Addin

We've updated the West Wind Web Connection Visual Studio add in and added it to the Visual Studio Gallery.

This means the addin can now be installed from Visual Studio itself via Tools -> Extensions and Updates. Because it's an installed extension in Visual Studio and it lives in the Extension Gallery, the addin can now automatically update itself when an update is available. It should show on the Updates tab in the Extension Manager and on the Visual Studio Notifications list.

The new addin also supports Visual Studio 2017 which brings a number of very cool productivity enhancements and a much more lightweight Web development experience.

wwRequest::GetLogicalPath() now returns Proxied Urls

The Request.GetLogicalPath() method now properly returns the active URL the user sees, even if the URL was rewritten by tooling like IIS UrlRewrite or an internal proxy redirection.

For example, if you are using UrlRewrite to route extensionless URLs to a Web Connection Process class (UrlRewriteHandler) you now get:

*** Original Url is: http://localhost/albumviewer/api/album/516
lcUrl = Request.GetLogicalPath()
* lcUrl  =  /albumviewer/api/album/516
*   not  =  /api/UrlRewriteHandler.av  (redirected url)

lcRedirectedUrl = Request.ServerVariables("PATH_TRANSLATED")
* lcRedirectedUrl  =  /api/UrlRewriteHandler.av  (redirected url)

Previously GetLogicalPath() would always return the redirected path only.

If you are using URL redirection on your Web Server, you probably know that when you rewrite a URL on the server to a new location the original URL is lost to the redirected target URL.

A typical example for Url Rewriting is to rewrite Extensionless Urls to a specific Web Connection Url. For example, check out this request trace from an Extensionless URL in the AlbumViewer:

The original URL is:

/albumviewer/api/album/516

and it's rewritten to:

/albumviewer/api/RewriteHandler.av

The RewriteHandler is Web Connection's route handler that gets fired when a re-written request is found, and you can override this handler to route requests to the appropriate handler. The most common thing to do is simply route to a method of the current class. You can find out more about this process in the documentation.

If you are logging requests or otherwise want to find out what the original URL the user sees in the address bar is, you had to explicitly look at the HTTP_X_ORIGINAL_URL header value in Request.ServerVariables().

Web Connection 6.10 now always returns the original URL when a request is proxied. The logic internally first checks for the proxy path and, if found, uses that. If not found, the PATH_INFO variable is returned as before.
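Conceptually the new lookup works something like this simplified sketch (not the actual implementation):

```foxpro
*** Sketch: fallback logic behind GetLogicalPath()
lcUrl = Request.ServerVariables("HTTP_X_ORIGINAL_URL")   && set when UrlRewrite proxies
IF EMPTY(lcUrl)
   lcUrl = Request.ServerVariables("PATH_INFO")          && no rewrite - direct path
ENDIF
RETURN lcUrl
```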

This is important for applications that generate URLs on the fly and need to figure out relative or fully qualified paths to embed into the page, or to send out as email links for example. It's a minor feature, but an important one for those of you that use UrlRewrite.

Admin Script Compilation

The Admin page can now properly compile MVC Style Script pages via the Admin page and this link:

This operation can run through either a single folder or all folders of your site and find all matching files you specify via the wildcard and recompile the script files.

While this feature was there previously it didn't actually work with the new scripting engine, and it didn't support recursive compilation.

Using this option can allow you to run with pre-compiled scripts if you didn't explicitly run through all scripts and upload them to your site.

There's also a new wwScripting::CompileAspScript() method that lets you compile an individual script. The above script compilation features use this method to handle the script compilation. You can look at WCSCompile() in wwServer.prg if you want to see how that works.

wwUserSecurity Password Encryption

We've added the ability to encrypt passwords in the wwUserSecurity class by setting the new cPasswordEncryptionKey property to a string value used as the hash encryption key.

If the cPasswordEncryptionKey property is set, any SaveUser() operation on the object causes the password property to be encrypted if it is not encrypted yet. Encrypted passwords are postfixed with ~~ to detect whether the field is encrypted or not.

The password encryption uses salted SHA512 hashing to produce the password hash used in the user security table.

By default cPasswordEncryptionKey is empty so no password encryption occurs unless you explicitly specify it.

If you plan on using this feature I would highly recommend that you subclass the wwUserSecurity class and set the cPasswordEncryptionKey as part of the class. How you set the value is up to you - whether it's simply a static value you assign, or whether you retrieve the key from some known safe location like Azure Key Storage or a similar service.
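A minimal sketch of such a subclass - the class name and key value here are examples only:

```foxpro
*** Hypothetical subclass that bakes in the encryption key
DEFINE CLASS AppUserSecurity AS wwUserSecurity OF wwUserSecurity.prg
   cPasswordEncryptionKey = "my-app-secret-key"   && or load from a safe store
ENDDEFINE
```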

Password Hashing is One-Way

Please note that once you encrypt passwords you can't retrieve them for users. Hashing is basically a one-way trip, and any authentication that compares passwords hashes the input value and matches it against the stored password hash. The only way to 'fix' a password for a user who has lost it is for them to create a new one.
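In practice that means login validation hashes the entered password and compares hashes, roughly like this sketch (names and the ComputeHash() parameter order are illustrative assumptions, not the actual wwUserSecurity implementation):

```foxpro
*** Sketch: validating a password against a stored hash
loEnc = CREATEOBJECT("wwEncryption")
lcInputHash = loEnc.ComputeHash(lcEnteredPassword,"HMACSHA512",lcEncryptionKey) + "~~"
llValid = (lcInputHash == loUser.Password)   && compare hashes, never plain text
```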

wwUserSecurity Structure Changes

As part of the updates for encrypted passwords we also made some changes to the structure of the wwUserSecurity table. These changes go beyond the password field, but since we had to make changes anyway we updated the table to use VARCHAR types for all text fields.

The new structure is:

CREATE CURSOR usersecurity ;
(    PK          V (40),;
   USERNAME    V (80),;
   PASSWORD    V (80),;
   FULLNAME    V (80),;
   MAPPEDID    V (40),;
   EMAIL       M ,;
   NOTES       M ,;
   PROPERTIES  M ,;
   LOG         M ,;
   LEVEL       Y ,;
   ADMIN       L ,;
   CREATED     T ,;
   LASTON      T ,;
   LOGONCOUNT  I ,;
   ACTIVE      L ,;
   EXPIRESON   D )

If updating from the old version you should also run the following command to trim white spaces off the fields:

REPLACE ALL PK with TRIM(PK), ;
            USERNAME WITH TRIM(USERNAME),;
            PASSWORD WITH TRIM(PASSWORD),;
            FULLNAME WITH TRIM(FULLNAME),;
            MAPPEDID WITH TRIM(MAPPEDID)

Breaking Change

The change above is a breaking change, and you need to update any existing usersecurity tables to the new structure shown above.

Updated to a new Markdown Parser Library

Markdown conversion was introduced in Web Connection 6.0. Markdown is a simple text editing format that generates HTML output using a very simple markup language that can be easily typed as text. Markdown is awesome to use instead of text input as it allows simple markup like bold and italic text, lists, headers and so on with a very text-like format that doesn't require a special editor. Adding simple interactive editing features like a toolbar is also pretty easy to accomplish just with some simple javascript.

Web Connection's Markdown support comes via the MarkdownParser class and more typically through the Markdown function that it exposes. To parse Markdown you can simply do:

lcMarkdown = "This is some **bold** and *italic* text"
lcHtml = Markdown(lcMarkdown)

More commonly though you're likely to use markdown in your HTML pages to write out rich content. For example on the message board each message's body in a thread is displayed with:

<div class="message-list-body"><%= Markdown(loMessage.oMessage.Body) %></div>

or if you have a custom configuration options for formatting the Markdown:

<div class="message-list-body"><%= poMdParser.Parse(loMessage.oMessage.Body) %></div>

In Web Connection 6.10 we've switched from the .NET CommonMark.NET package to the Markdig parser. Markdig supports GitHub-flavored Markdown, automatic URL linking, and a slew of other standards that sit on top of Markdown out of the box, which in older versions we had to implement on our own. Besides the simplicity, Markdig is also much easier to extend and quite a bit faster, especially since it now performs internally most of the add-on operations we previously had to do in FoxPro.

Breaking Change

This is a breaking change and in order to use the Markdown Features in Web Connection 6.10 you need to make sure you include the Markdig.dll with your Web Connection distribution. This replaces the CommonMarkNet.dll that was previously used.

General Purpose Features

The following features are focused on the general purpose library portion of Web Connection. These are also features that will show up in future versions of West Wind Client Tools.

SFTP Support with the wwSFTP Class

One of the most requested features in both Web Connection and Client Tools over the years has been support for secure FTP. Secure FTP is a tricky thing to provide as there are several standards and because the built-in Windows library that Web Connection uses - WinINET - doesn't support any secure FTP features.

In Web Connection 6.10 there's now support for SFTP which is FTP over SSH via the wwSftp Class. The class is based on the familiar wwFTP class and the interface to send and receive files remains the same as with the original wwFtp class.

loFtp = CREATEOBJECT("wwSftp")
loFtp.nFtpPort = 23

lcHost = "127.0.0.1"
lnPort = 23
lcUsername = "tester"
lcPassword = "password"

*** Download
lcOutputFile = ".\tests\sailbig.jpg"
DELETE FILE lcOutputFile

lnResult = loFtp.FtpGetFile(lcHost,"sailbig.jpg",".\tests\sailbig.jpg",1,lcUsername,lcPassword)

this.AssertTrue(lnResult == 0,loFtp.cErrorMsg)
this.AssertTrue(FILE(lcOutputFile))


*** Upload a file
lcSourceFile = ".\tests\sailbig.jpg"
lcTargetFile = "Sailbig2.jpg"

lnResult = loFtp.FtpSendFile(lcHost,lcSourceFile,lcTargetFile,lcUsername,lcPassword)
this.AssertTrue(lnResult == 0,loFtp.cErrorMsg)

There are both high-level (all-in-one upload/download file functions) and low-level functions. The low-level functions require that you open a connection explicitly and fire each operation, potentially multiple times. Again this maps to the existing old wwFtp functionality:

loFtp = CREATEOBJECT("wwSftp")

loFtp.cFtpServer =  "127.0.0.1"
loFtp.nFtpPort = 23
loFtp.cUsername = "tester"
loFtp.cPassword = "password"

loFtp.FtpConnect()

*** Change to a specific folder - convenience only - you can reference relative paths
this.AssertTrue(loFtp.FtpSetDirectory("subfolder"),loFtp.cErrorMsg)

*** Create a new directory
this.AssertTrue(loFtp.FtpCreateDirectory("SubFolder2"),loFtp.cErrorMsg)

*** Send a file into the new directory
this.AssertTrue(loFtp.FtpSendFileEx(".\tests\sailbig.jpg","subfolder2/sailbig.jpg")==0,loFtp.cErrorMsg)

*** Download the file just uploaded
this.AssertTrue(loFtp.FtpGetFileEx("subfolder2/sailbig.jpg",".\tests\sailbig2.jpg")==0,loFtp.cErrorMsg)

*** Delete it
this.AssertTrue(loFtp.FtpDeleteFile("subfolder2/sailbig.jpg")==0,loFtp.cErrorMsg)

*** And delete the folder
this.AssertTrue(loFtp.FtpRemoveDirectory("Subfolder2"),loFtp.cErrorMsg)

No FTPS support

Note that this feature does not support FTPS, which is yet another protocol that uses TLS over FTP and is considerably less common than the SFTP over SSH protocol implemented by this class.

wwUtils::GetUniqueId()

This routine lets you generate semi to fully unique IDs based on GUIDs. You can specify a size between 15 and 32 characters, with 32 characters preserving the full fidelity of a GUID. Any smaller value gives you somewhat unique (definitely a lot more unique than SYS(2015) though) values.

wwUtils::SplitString()

This function provides the same functionality as ALINES() but adds the important ability to split very long lines (that exceed 254 characters) with MEMLINES() into additional lines.

This is critical for any sort of code parsing library that generates code with string literals which cannot exceed FoxPro's literal string limit.

This function is used internally in wwScripting and webPageParser and the Markdown parser but can be useful for anything else that needs to generate code output.

wwEncryption::ComputeHash() adds HMAC Hash Algorithms

wwEncryption::ComputeHash() adds HMACSHA1, HMACSHA256, HMACSHA384 and HMACSHA512 algorithms. HMAC algorithms use complex salting cycles to add complexity and delay to generated hashes, using an industry-standard and repeatable algorithm.
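Usage follows the existing ComputeHash() pattern - a sketch, assuming a (text, algorithm, salt) parameter order:

```foxpro
*** Sketch: HMAC hash with a required salt/key value
loEnc = CREATEOBJECT("wwEncryption")
lcHash = loEnc.ComputeHash("text to hash","HMACSHA256","my-secret-salt")
? lcHash    && encoded hash string
```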

Note the HMAC related functions require that you specify a Salt value for the hash.

wwDotnetBridge now supports TLS 1.2

wwDotnetBridge fires up a new instance of the .NET Runtime inside of Visual FoxPro when it launches, and as such any configuration for the app has to be set as well. A number of people using wwDotnetBridge ran into problems with HTTPS content not working correctly, because older versions of .NET don't support TLS 1.2 by default.

In Web Connection 6.10 we always enable TLS 1.2 support (and conversely disable obsolete and insecure SSL3 support). The old version only allowed support for TLS 1.0. This affects any HTTP clients, whether you use HTTP clients directly or use libraries (such as credit card processing APIs, REST APIs etc.) that use HTTPS internally.
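If you're stuck on an older version, you can in principle flip the protocol yourself through wwDotnetBridge's static property access. This is only a sketch - depending on your wwDotnetBridge version, setting an enum-typed property may require ComValue handling instead of a raw number:

```foxpro
*** Sketch: enabling TLS 1.2 on .NET's ServicePointManager manually
loBridge = CREATEOBJECT("wwDotnetBridge","V4")
*** 3072 = SecurityProtocolType.Tls12
loBridge.SetStaticProperty("System.Net.ServicePointManager","SecurityProtocol",3072)
```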

Summary

Phew, a lot of stuff in this release. There are also a number of small bug fixes and minor performance tweaks based on Message Board discussions in the last few months.

As always, we actively encourage feedback, so if you run into a bug or have a feature request, let us know by posting a message in the support forum.


Creating Truly Unique Ids in FoxPro


Generating ids is a common thing for Database applications. Whether it's for unique identifiers to records in a database, whether you need to send a unique, non-guessable email link to a customer or create a temporary file, unique IDs are common in software development of all kinds.

Why not SYS(2015)?

FoxPro internally includes a not-so-unique id generation routine in SYS(2015):

? SYS(2015)
* _4UJ0VDHVX

This works for some things as long as they are internal to the application. But there are lots of problems with this approach:

  • The values are easily guessable as they are based on sequential timestamps
  • Not unique across machines
  • The id value is too short
  • Duplication rate can be very high

SYS(2015)'s original purpose was internal to FoxPro: generating unique procedure names for generated code in some of the FoxPro tools. It worked fine because when it was created we had a single application running. Within a single application the ids are unique, but as soon as you throw in multiple applications - either on the same machine or on the network - SYS(2015) is no longer able to even remotely guarantee unique ids.

For anything across processes or machines SYS(2015) is unacceptable. This can be mitigated somewhat by adding process or thread ids to the string, but there is still too much possibility of conflict. Because the actual ID (minus the leading _) is only 9 characters, the chance for duplication is also pretty high once the timestamp 'rounds' around. If you account for different timezones and multiple machines you find that the ids are not anywhere near unique.
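The typical mitigation looks like this sketch - better, but still not collision-safe across machines:

```foxpro
*** Sketch: SYS(2015) padded with the process id
*** unique per process on one machine, still not safe across machines
lcId = SYS(2015) + TRANSFORM(_VFP.ProcessId)
```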

Guids

One way to ensure you generate truly unique IDs is to generate GUIDs. Guids are guaranteed to be unique across time and space (machines) as they are generated by an algorithm based on a timestamp and the machine's MAC id. Guids are safe and relatively easy to generate even in FoxPro:

FUNCTION CreateGUID
************************************************************************
* wwapi::CreateGUID
********************
***    Author: Rick Strahl, West Wind Technologies
***            http://www.west-wind.com/
***  Modified: 01/26/98
***  Function: Creates a globally unique identifier using Win32
***            COM services. The value is guaranteed to be unique
***    Format: {9F47F480-9641-11D1-A3D0-00600889F23B}
***            if llRaw is .T. a binary string is returned
***    Return: GUID as a string or "" if the function failed
*************************************************************************
LPARAMETERS llRaw
LOCAL lcStruc_GUID, lcGUID, lnSize

DECLARE INTEGER CoCreateGuid ;
  IN Ole32.dll ;
  STRING @lcGUIDStruc
  
DECLARE INTEGER StringFromGUID2 ;
  IN Ole32.dll ;
  STRING cGUIDStruc, ;
  STRING @cGUID, ;
  LONG nSize
  
*** Simulate GUID structure with a string
lcStruc_GUID = REPLICATE(" ",16) 
lcGUID = REPLICATE(" ",80)
lnSize = LEN(lcGUID) / 2
IF CoCreateGuid(@lcStruc_GUID) # 0
   RETURN ""
ENDIF

IF llRaw
   RETURN lcStruc_GUID
ENDIF   

*** Now convert the structure to the GUID string
IF StringFromGUID2(lcStruc_GUID,@lcGuid,lnSize) = 0
  RETURN ""
ENDIF

*** String is UniCode so we must convert to ANSI
RETURN  StrConv(LEFT(lcGUID,76),6)
* Eof CreateGUID

To use these Guid routines:

? CreateGuid()
* {344986DB-D674-42BD-9A2E-A7833B190E05}

? CreateGuid(.t.)
*‰öÓeî‹èK“§Y‚þ¡I

? LOWER(CHRTRAN(CreateGuid(),"{}-"))
* 6823a6f0af7040318964e74cc8a78833

The middle one represents binary characters. Typically you wouldn't use that except for direct storage to a binary field (using CAST() most likely). The last value is what I recommend if you use GUIDs in any sort of user-facing scenario. Using lowercase values makes the long value much easier to read.

Guids are safe and guaranteed to be unique, but they are big. Even if you strip out the {}- from the string, it's still 32 characters. The binary value is 16 bytes, which is better, but for FoxPro data the last thing you'd want to do is use binary data for a field especially an indexable one.

Guids as keys also are a problem because they are truly random. There's very little commonality between one GUID and another, so any indexing scheme can't really pack GUIDs. Coupled with the large string size GUID indexes tend to be larger than other indexes.

Creating custom Variations off of Guids

In West Wind Web Connection we've had to use unique ids for a long time. Session tables in particular - with their potentially high volume insert/read operations - have always needed to have unique values that were unique across machines. I've gone through a number of iterations with this starting originally with SYS(2015) plus tacked on process and threadIds plus random characters.

But more recently (with Web Connection 6.0) I switched to using subsets of Guids and finally more recently I built a new routine that can strip down a Guid to a 16 character string safely.

How do you fit a 32 character string into 16 characters? Simple: GUIDs use hex notation, which means the number of characters used is actually twice the number of bytes in the actual GUID binary. If you map each byte value into the full alphabet, digits and perhaps a few symbol characters, you can get pretty close to representing GUIDs in full. Note you still lose some fidelity here - because we're shoehorning 256 byte values down to about 70 characters - but in my testing, running 10 million guids in a single run and over a billion in aggregate, I've not been able to generate any duplicates. That doesn't mean it can't happen, but it's very, very unlikely. If you need 100% guarantees then stick with Guids - otherwise this variation is good enough.

The current routine that West Wind Web Connection and the West Wind Client Tools (versions 6.10 and later) use is this:

************************************************************************
*  GetUniqueId
****************************************
***    Author: Rick Strahl, West Wind Technologies
***            http://www.west-wind.com/
***  Function: Create a unique ID based on a Guid spread over 
***            full alpha, digit and some symbols
***    Assume:
***      Pass: lnLength = length between 8 and 16 - 16 is full Guid
***    Return:
************************************************************************
FUNCTION GetUniqueId(lnLength,llIncludeSymbols,lcAdditionalChars)
LOCAL lcChars, lcGuid, lcId, lnX, lcHex, lnHex, lcGuidBinary

IF VARTYPE(lnLength) # "N" 
   lnLength = 16
ENDIF
IF lnLength < 8
   lnLength = 8
ENDIF
IF lnLength > 16
   lnLength = 16
ENDIF   
IF EMPTY(lcAdditionalChars)
   lcAdditionalChars = ""
ENDIF   

lcChars = "abcdefghijkmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890" + IIF(llIncludeSymbols,"!@#$%&*+-?","") + lcAdditionalChars
lcGuidBinary = REPLICATE(" ",16) 

DECLARE INTEGER CoCreateGuid ;
  IN Ole32.dll ;
  STRING @lcGUIDStruc
CoCreateGuid(@lcGuidBinary)

lcId = ""
FOR lnX = 1 TO lnLength
   lnHex = ASC(SUBSTR(lcGuidBinary,lnX,1)) % LEN(lcChars)
   lcId = lcId + SUBSTR(lcChars,lnHex + 1,1)   
ENDFOR

RETURN lcID
*   GetUniqueId

The routine basically grabs a new Guid and then breaks the Guid's bytes out into values that index into a 'string array' - a string of allowable characters. The code loops through all the bytes and pushes them into a new string based on that 'string array'.

To use this:

? GetUniqueId()   && Full 16 chars
* z9h8snad4dwe18sk

? GetUniqueId(8)  && Minimum
* cxip5nre

? GetUniqueId(20) && stripped to 16
* pfdogee6kd29978j

Note that you can pass a parameter for the number of characters to generate for the ID. The more characters you choose, the more reliable the id - up to 16. For small local scenarios 8 characters are going to be enough. Again, running tests I was unable to generate duplicate IDs in a single run, although over the course of a billion operations I managed to generate a total of 2 dupes. That's very low, and reliability goes up significantly as you add more characters.

This routine replaces my existing GetUniqueId() routine in Web Connection. The main change here is that the actual string generated is a lot shorter than with the old routine. The old one required 15 characters up to 32. Here we require 8 up to 16. If a string greater than 16 characters is requested, only 16 characters are returned. This should be OK for backwards compatibility: with VarChar(x) types in the DB the values will just work, and with Char(x) extra spaces fill out the field.

Performance

These routines are not blazingly fast, especially compared to SYS(2015). Most of this is due to the complexity of GUID generation and the FoxPro interop required to call it, as well as the limited character iteration support in FoxPro - using SUBSTR() to iterate over each character in a string is very, very slow. Interop in FoxPro has a bit of overhead, and the routines require Unicode to ANSI conversions internally. Still, on my machine I generate 10,000 ids in 2.5 seconds, which puts the creation time at roughly 1/4 millisecond - acceptable for a nearly unique, reasonably sized id. By comparison though, SYS(2015) took less than a quarter second for those same 10,000 generations.
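A timing like the one above can be reproduced with a simple loop along these lines:

```foxpro
*** Sketch: rough timing of 10,000 id generations
lnSecs = SECONDS()
FOR lnX = 1 TO 10000
   lcId = GetUniqueId()
ENDFOR
? SECONDS() - lnSecs, "seconds"
```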

Summary

Remember that this last routine is not 100% guaranteed to be unique - but it's pretty close. If you need 100% guaranteed unique IDs stick with full GUIDs. Personally I feel pretty confident that there won't be any dupes with the GetUniqueId() routine even if I have a fully distributed application where data is entered in multiple locations.

There you have it - a few ways to generate unique IDs in FoxPro. Enjoy.


Controlling the JSON.NET Version in wwDotnetBridge with Assembly Redirects


Round Hole, Square Peg

West Wind Web Connection and West Wind Internet And Client Tools include JSON parsing features that are provided through .NET and the wwDotnetBridge extension that bridges to the popular JSON.NET .NET component library. JSON.NET is the most widely used .NET JSON parsing library and the wwJsonSerializer class utilizes it for its DeserializeJson() parsing code. The method basically passes a JSON input, lets JSON.NET parse it into an internal object tree, which is then unpacked into a clean FoxPro object, collection or value.

A History of wwJsonSerializer

Initially wwJsonSerializer used a FoxPro based parser, which was both slow and not very reliable. FoxPro has a number of limitations when it comes to string parsing, the worst of which is that there's no efficient way to parse a string character by character. Using SUBSTR() on each position is excruciatingly slow on large strings, and in order to build an effective parser you have to parse strings one character at a time. When I built the original parser I took a few shortcuts to avoid the char-by-char parsing, and it resulted in not very stable JSON parsing with many edge cases that just didn't work.

Bottom line - building an effective parser is something better left to people who specialize in it, and JSON.NET is a proven open source library that's used by Microsoft in most of their frameworks. If it's good enough for them, it's good enough for me.

I've been using this setup for a number of high throughput service applications and this setup of JSON parsing has worked out very well - it's much faster than the manual parsing of the old code and even with the overhead of creating a FoxPro object out of the JSON object graph, it's still very speedy. The results are also reliable. I have yet to see a de-serialization failure on any valid JSON input.

JSON.NET Version Issues

As cool as JSON.NET usage in West Wind products is, there are also some issues. Because JSON.NET is so widely used in .NET, it's quite likely that you will run into other .NET components that also use JSON.NET - and quite likely use a different version of it. Since .NET can only load one version of a library at a time, this can cause a problem as one component will not be able to load the version of JSON.NET that it's binding to.

.NET is a statically linked runtime environment so binaries are almost always tied to a specific version number of the component. So if you have two components or an application and components trying to use different versions of the same library there can be a conflict.

Luckily .NET provides a workaround for this in most situations.

Assembly Redirects to the Rescue

.NET has a built-in system for runtime version forwarding, which can be accomplished by way of Assembly Redirects in the application's .config file.

For FoxPro application's this means you can put these assembly redirects into one of these files:

  • YourApp.exe.config
  • VFP9.exe.config

The config file is associated with the launching .EXE file, so that's either your standalone compiled application file, or the FoxPro IDE vfp9.exe.

The following is an example of a .config file that forces JSON.NET usage of any version to version 8.0:

<?xml version="1.0"?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.2" />
  </startup>
  <runtime>
    <loadFromRemoteSources enabled="true"/>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-6.0.0.0" newVersion="8.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

The key element is the <dependentAssembly> that describes Newtonsoft.Json and basically redirects any version found - oldVersion - to the newVersion. The new version in this case is the greater version number between the one wwDotnetBridge provides (6.x, which is fairly old) and whatever other higher version is in use. You can check the .dll version number in the file details (Explorer -> Right Click DLL -> Details). In the example here I'm interfacing with a .NET Socket.IO client library that uses Newtonsoft.Json version 8, and that's what's reflected in the newVersion. Now when wwDotnetBridge asks for version 6.0 of JSON.NET, .NET redirects to version 8.0 and everything works - as long as the interface of the library supports the same signatures of methods and properties that the code accesses.
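You can also read the version from within FoxPro instead of going through Explorer, using AGETFILEVERSION() - a quick sketch:

```foxpro
*** Sketch: read the file version of a .NET dll from FoxPro
LOCAL laVer[15]
IF AGETFILEVERSION(laVer,"Newtonsoft.Json.dll") > 0
   ? laVer[4]    && the file version string
ENDIF
```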

This approach works for any .NET assembly (dll) where there might be multiple versions in place.
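The same pattern can be adapted for any other assembly. Here's a hedged template - the assembly name, publicKeyToken and version numbers below are placeholders that you'd replace with the values from the actual DLL you are redirecting:

```xml
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <!-- placeholders: substitute your assembly's real name, token and versions -->
      <assemblyIdentity name="Some.Assembly" publicKeyToken="0123456789abcdef" culture="neutral" />
      <bindingRedirect oldVersion="0.0.0.0-3.0.0.0" newVersion="4.0.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>
```

You can get the name and publicKeyToken from the DLL's properties or from an assembly viewer tool.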

Assembly Redirects don't always Work

In the case of JSON.NET, assembly redirects always work because James Newton-King, the creator of JSON.NET - with some serious nudging from Microsoft - has so far ensured that the base interfaces don't change. While there are many new features in newer versions of JSON.NET, all of them are implemented with custom parsers and serializers that plug into a pipeline. The result is that you can safely forward JSON.NET v6 to v9 and expect things to work.

This approach can however also fail if you have a component that is not backward and forward compatible. Many components change behavior and interfaces when major version changes happen, and if a change affects an interface that you are calling you can end up with runtime errors that are difficult to track down. Luckily, this is not a common scenario.

Summary

Version conflicts can be painful. The error messages you get for version conflicts are often not very conclusive and seem to point at other issues (the usual error is: Unable to load dependent assembly), and worst of all, the .NET error message usually doesn't tell you which sub-component failed to load.

The first line of defense are assembly redirects that you can specify in your application's .config file. In most common version conflict situations this is the solution, as is the case for the JSON.NET version conflict, which is probably the most common one you'll run into.

Persisting Static Objects in Web Connection Applications


Persisting Objects in Time in Web Connection

Web Connection Server applications are in essence FoxPro applications that are loaded once and stay in memory. This means they have state that sticks around for the lifetime of the application. Persistence in time…

Global State: The wwServer object

In Web Connection, the top level object that always sticks around - in effect the global object - is the wwServer instance. Any property or object that is attached to this instance by extension also becomes global and is effectively around for the lifetime of the application.

What this means is that you can easily attach properties or resources to your wwServer instance and create cached instances of objects and values that are accessible via the Server private variable anywhere in your Web Connection code.

This is useful for resource hungry components that take a while to spin up, or for cached resources like large lookup tables or collections/arrays of values that you repeatedly need, but maybe don't want to reload on each hit.

Attaching Application State to wwServer

There are a number of ways to attach custom values to the global wwServer instance:

  • Add a Property to your Server Instance
  • Use Server.oResources.Add(key,value)
  • Use Server.oResources.AddProperty(propname,value)

Adding Properties to wwServer Explicitly

You can explicitly add properties to your wwServer instance. Your custom wwServer class lives in MyAppMain.prg (replace MyApp with whatever your appname is) and in it is a definition for a server instance:

DEFINE CLASS MyAppServer as wwServer OLEPUBLIC

oCustomProperty = null

PROTECTED FUNCTION OnInit

this.oCustomProperty = CREATEOBJECT("MyCachedObjectClass")
...
ENDFUNC

ENDDEFINE

The oCustomProperty value or object is loaded once on startup and then persists for the duration of the Web Connection server application.

You can then access this property from anywhere in a Process class as:

loCustom = Server.oCustomProperty

And voila you have a new property that exists on the server instance and is always persisted.

COM Interfaces vs new Server Properties

One problem with this approach is that the new property causes a COM Interface change to the COM server that gets registered when Web Connection runs as a COM server. Whenever the COM interface signature changes, the COM object needs to be explicitly re-registered or else the server might not instantiate under COM.

So, as a general rule it's not a good idea to frequently add new properties to your server instance.

One way to mitigate this is to create one property that acts as a container for any persisted objects and then use that object to hang off any other objects:

DEFINE CLASS ObjectContainer as Custom
   oCustomObject1 = null
   oCustomObject2 = null
   oCustomObject3 = null
ENDDEFINE

Then define this on your wwServer class:

DEFINE CLASS MyAppServer as wwServer OLEPUBLIC

oObjectContainer = null

PROTECTED FUNCTION OnInit

this.oObjectContainer = CREATEOBJECT("ObjectContainer")
...
ENDFUNC

ENDDEFINE

You can then hang any number of sub properties off this object and still access them with:

loCustom1 = Server.oObjectContainer.oCustomObject1
loCustom1.DoSomething()

The advantage of this approach is that you get to create an explicit object contract by way of a class you implement that clearly describes the structure of the objects you are ‘caching’ in this way.

For COM this introduces a single property that is exposed in the registered external COM interface - adding additional objects to the container has no impact on the COM interface exposed to Windows, so no COM re-registration is required.

Using oResources

The Web Connection Server class includes an oResources object property that provides a generic version of what I described in the previous section. Rather than a custom object you create, a pre-created object exists on the server, and you can hang your persistable objects off that instance.

You can use:

  • AddProperty(propname,value) to create a dynamic runtime property
  • Add(key,value) to use a keyed collection value

.AddProperty() like the name suggests dynamically adds a property to the .oResources instance:

PROTECTED FUNCTION OnInit

this.oResources.AddProperty("oCustom1", CREATEOBJECT("CustomClass1"))
this.oResources.AddProperty("oCustom2", CREATEOBJECT("CustomClass2"))
...
ENDFUNC

You can then use these custom properties like this:

loCustom1 = Server.oResources.oCustom1

The behavior is the same as the explicit object described earlier, except that there is no explicit object that describes the custom property interface. Rather the properties are dynamically added at runtime.

Using .Add() works similarly, but it doesn't add properties - instead it simply stores collection values.

PROTECTED FUNCTION OnInit

this.oResources.Add("oCustom1", CREATEOBJECT("CustomClass1"))
this.oResources.Add("oCustom2", CREATEOBJECT("CustomClass2"))
...
ENDFUNC

This creates collection entries that you retrieve with:

loCustom1 = Server.oResources.Item("oCustom1")
loCustom2 = Server.oResources.Item("oCustom2")

This latter approach works best with truly dynamic resources that you want to add and remove conditionally. Internally the wwServer::oResources property uses a wwNameValueCollection, so you can add, remove and update resources stored in the collection quite easily.

Persistence in Time

One of the advantages of Web Connection over typical multi-threaded COM server applications in ASP.NET - where COM servers are reloaded on every hit - is that Web Connection does have state: the application stays alive between hits. This state allows the FoxPro instance to cache data internally - so data buffers and memory as well as property state can be cached.

You can also leave cursors open and re-use them in subsequent requests. And as I've shown in this post, you can also maintain object state by caching it on the wwServer instance. This sort of ‘caching’ is simply not possible if you have COM servers getting constantly created and re-created.

All this adds up to a lot of flexibility in how you manage state in Web Connection applications. But you also need to be aware of your memory usage. You don't want to go overboard with cached data - FoxPro itself is very good at maintaining internal data buffers, especially if you give it lots of memory to run in.

Be selective in your ‘caching’ of data and state and resort to caching/persisting read-only or read-rarely data only. No need to put memory strain on the application by saving too much cached data. IOW, be smart in what you cache.

Regardless, between Web Connection's explicit caching and FoxPro's smart buffering and memory usage (as long as you properly constrain it) you have a lot of options on how to optimize your data intensive operations and data access.

Now get to it. Time's a wastin'…

this post created with Markdown Monster

GAC Assemblies with wwDotnetBridge


wwDotnetBridge allows you to load arbitrary .NET assemblies from the local machine by explicitly referencing a DLL (assembly) file on disk.

It's as easy as using the LoadAssembly() method to point at the DLL, and you're off to the races:

loBridge = GetwwDotnetBridge()

*** load an assembly in your path
IF (!loBridge.LoadAssembly("Markdig.dll"))
   ? "Couldn't load assembly: " + loBridge.cErrorMsg
   RETURN
ENDIF   

loMarkdig = CREATEOBJECT("Markdig.MarkdownPipelineBuilder")
* ... off you go using a class from the assembly

Assemblies are found along your FoxPro path, via a relative path from your current folder (.\subfolder\markdig.dll), or of course via a fully qualified path.
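To make those resolution rules concrete, here's a hedged sketch of the three lookup styles - the file names and folders are illustrative only:

```foxpro
*** Along the FoxPro path (SET PATH) or in the current folder
loBridge.LoadAssembly("Markdig.dll")

*** Relative to the current folder
loBridge.LoadAssembly(".\bin\Markdig.dll")

*** Fully qualified path
loBridge.LoadAssembly("C:\myapp\bin\Markdig.dll")
```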

GAC Assemblies

Things are a bit more tricky with assemblies that live in the Global Assembly Cache (GAC), which is a machine-wide registry of 'global' .NET assemblies. Although the GAC has lost a lot of its appeal in recent years, with most components migrating to NuGet and local project storage as the preferred installation mechanism, most Microsoft assemblies are "GAC'd", and the base framework assemblies all live in the GAC.

.NET assemblies that are signed have what is known as a fully qualified assembly name, which is the name by which any assembly registered in the GAC is referenced. The preferred way to load an assembly from the GAC is to use this special name.

Here's what it looks like for loading the Microsoft provided System.Xml package for example:

loBridge.LoadAssembly("System.Xml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089")

The GAC is nothing more than a special folder in the C:\Windows\Microsoft.NET\assembly folder that is managed by the .NET Framework. This folder hierarchy contains assemblies that are laid out in a special format that ensures uniqueness of each assembly that lives in the GAC by separating out version numbers and sign hashes. Go ahead and browse over to that folder and take a look at the structure - I'll wait here. Look for some common things like System, System.Xml or System.Data for example.

wwDotnetBridge provides a few common mappings so you can just use the assembly name:

else if (Environment.Version.Major == 4)
{
    if (lowerAssemblyName == "system")
        AssemblyName = "System, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089";
    else if (lowerAssemblyName == "mscorlib")
        AssemblyName = "mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089";
    else if (lowerAssemblyName == "system.windows.forms")
        AssemblyName = "System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089";
    else if (lowerAssemblyName == "system.xml")
        AssemblyName = "System.Xml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089";
    else if (lowerAssemblyName == "system.drawing")
        AssemblyName = "System.Drawing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a";
    else if (lowerAssemblyName == "system.data")
        AssemblyName = "System.Data, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089";
    else if (lowerAssemblyName == "system.web")
        AssemblyName = "System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a";
    else if (lowerAssemblyName == "system.core")
        AssemblyName = "System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089";
    else if (lowerAssemblyName == "microsoft.csharp")
        AssemblyName = "Microsoft.CSharp, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a";
    else if (lowerAssemblyName == "microsoft.visualbasic")
        AssemblyName = "Microsoft.VisualBasic, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a";
    else if (lowerAssemblyName == "system.servicemodel")
        AssemblyName = "System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089";
    else if (lowerAssemblyName == "system.runtime.serialization")
        AssemblyName = "System.Runtime.Serialization, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089";
}

which means you can just reference LoadAssembly("System.Web"). For all other assemblies however you have to use the fully qualified assembly name.

wwDotnetBridge internally references a number of assemblies so these don't ever have to be explicitly referenced:

Assemblies referenced by wwdotnetbridge

When do I need GAC References?

Earlier today I got a question from Alex Sosa about having to use a long, system specific file path to load a GAC'd assembly.

In this case he's trying to use the PowerShell automation interface to talk to PowerShell. The following code works, but it hardcodes the path to the physical assembly:

loBridge = createobject('wwDotNetBridge', 'V4')

llReturn = loBridge.LoadAssembly('C:\Windows\Microsoft.Net\assembly\' + ;
'GAC_MSIL\System.Management.Automation\v4.0_3.0.0.0__31bf3856ad364e35' + ;
'\System.Management.Automation.dll')
if not llReturn
  messagebox(loBridge.cErrorMsg)
  return
endif 

* Create a PowerShell object.
loPS = loBridge.InvokeStaticMethod('System.Management.Automation.PowerShell','Create')

This code works, but it hardcodes the path, which is ugly and may change if the version changes or if Windows is ever moved to a different location (like after a reinstall, for example). It's never a good idea to hardcode paths - if anything, the code above should use GETENV("WINDIR") to avoid the system specific drive and folder.
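Here's a hedged sketch of that small improvement - the GAC sub-folder layout is taken from the hardcoded path above, so this still breaks if the assembly version ever changes:

```foxpro
*** Build the GAC path off the actual Windows folder instead of C:\Windows
lcDll = GETENV("WINDIR") + ;
    "\Microsoft.Net\assembly\GAC_MSIL\System.Management.Automation\" + ;
    "v4.0_3.0.0.0__31bf3856ad364e35\System.Management.Automation.dll"

IF !loBridge.LoadAssembly(lcDll)
   ? "Couldn't load assembly: " + loBridge.cErrorMsg
   RETURN
ENDIF
```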

Use Fully Qualified Assembly Names instead

But a better approach with GAC components is to use strong assembly names. Any GAC assembly has to be signed and as a result has a unique assembly ID, which includes its name, version and a hash thumbprint. You can see part of that in the file path above.

Here's what this looks like:

llReturn = loBridge.LoadAssembly('System.Management.Automation, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35')

To get the fully qualified assembly name, you can use an assembly viewer tool like Reflector (which is what I typically use), JustDecompile from Telerik, ILSpy, or the Visual Studio assembly browser.

Any of these tools work, and here's what this looks like in Reflector:

You basically load the assembly into the tool and then look at the information properties for the assembly.

The advantage of the fully qualified assembly name - especially for Microsoft assemblies - is that the name rarely if ever changes (other than perhaps the version number). Even if the file name or the actual file version changes, the fully qualified assembly name still matches what's registered in the GAC.

So rather than hardcoding a file name, which may change in the future, you are pinning to a specific version of a GAC entry, which tends to stay stable.
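If you don't have a decompiler handy, you can also retrieve the fully qualified name programmatically. This is a hedged sketch that uses wwDotnetBridge to call the standard .NET System.Reflection.AssemblyName.GetAssemblyName() static method - the DLL path here is a placeholder:

```foxpro
loBridge = GetwwDotnetBridge()

*** Read the assembly identity from a physical DLL (path is illustrative)
loName = loBridge.InvokeStaticMethod(;
    "System.Reflection.AssemblyName","GetAssemblyName",;
    "C:\SomeFolder\SomeAssembly.dll")

*** Prints the fully qualified name, e.g.
*** SomeAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=...
? loBridge.GetProperty(loName,"FullName")
```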

Summary

For GAC assemblies using fully qualified assembly names is the right way to go as this is the 'official' and fastest way .NET loads assemblies from the GAC.

Keep in mind though that even though the GAC is a global assembly cache, there's no guarantee that the assembly you are referencing is there. The PowerShell assembly referenced above, for example, may not be present if PowerShell is not installed (it is an option in Windows Features, even though it's on by default).

GAC assemblies are generally more problematic than loose assemblies due to their strong signing and strict dependency management rules. Luckily they are on their way out, so other than Microsoft system assemblies their use should be fairly rare these days, with most third party assemblies shipping as loose assemblies via NuGet packages, which gives developers a lot more flexibility.

But either way, wwDotnetBridge can load loose or GAC'd assemblies easily enough. Have at it!

New Web Connection Server Startup Features in 6.15


Inspired by a recent discussion with Brett Carraway on the message board I've made a bunch of changes in the way Web Connection's startup code works.

Startup code debugging - especially in production and in COM mode - has been notoriously difficult, because Web Connection fires server initialization code as part of the server's Init() constructor. Due to this implementation, error information about startup failures - especially under COM - can be difficult to track down: when an error occurs there's no instance, and therefore no error information available to pass back to the COM host.

Because startup code fires in the constructor you often just end up with nondescript COM errors like Unable to retrieve Server Instance or COM Server Execution Error.

A quick Refresher on Web Connection Server Loading

Web Connection exposes both an OnInit() and OnLoad() event hook that you can hook user code to. Both methods fire once on server startup. OnInit() is meant for low level server configuration - basically where to find the configuration file. You rarely change this method from what the default template generates. OnLoad() is meant for letting you configure your application's environment, which means setting paths with SET PATH, loading classes and procedure files, opening global connections, mapping drives etc. - all the stuff your application needs. Currently, if there's an error in your user code, the server will never load, and you will get a nondescript error message.

New Feature: Separating OnInit() and OnLoad()

The new features coming in Web Connection 6.15 separate the initial server instantiation code in OnInit() and the more application specific configuration in OnLoad() into separate operations. OnLoad() now fires as part of the request pipeline, when the first request comes in, rather than as part of the constructor. Since most of the processing that can fail during startup is likely to be application specific code, this makes it much easier to trap and report errors that occur during the OnLoad() phase. OnInit() errors still fail as before, since that code still runs inside of the constructor.

Don't worry - these changes won't affect application level code. If you have existing OnInit() or OnLoad() code, that code will continue to run as is - no changes required. The changes only affect the underlying Web Connection Server plumbing, not your code.

Before I look at the new features and what they do in more detail, let's discuss the issues related to startup code debugging.

Startup Code Issues in Web Connection

Runtime startup code debugging is one of the trickiest things with Web Connection. Why? Because of the way Web Connection works, the entire server bootstrap and load process occurs in the Init() constructor code of the FoxPro server. This means if anything goes wrong at all, the server will not load, and more importantly, the server has no way to report error information back to the Web Connection Module and ultimately the IIS page that gets generated. When errors occur during initialization you typically just get nondescript COM errors and a 500 error page from IIS. Not so helpful, that.

All of this happens because Web Connection currently fires OnInit() and OnLoad() in the FoxPro Init() constructor function. Any code failure in Init() fails to produce an instance, and when running the server in COM mode produces a COM error that doesn't forward any information about what went wrong. While Web Connection can and does capture exceptions in the load sequence, it has no way to report the error back to the Web Connection module because no instance has been created yet. Even COMRETURNERROR doesn't work, because it first needs an instance.

So in current versions the only way to get error information is via logs. Startup errors are logged in wcTracelog.txt in the server EXE's folder (assuming permissions are available) as well as in the wwRequestLog. Today you can also use Server::Trace() to write Debug statements to the wcTracelog.txt file for debug type log output that can often help trace where code is reaching and what current values are. It works but is tedious if you're flying blind.

  • Development Mode - Code stops on error
    While in development that's usually not a problem. If something goes wrong you see the error and FoxPro stops which is perfect. You can see what happens and you can step through the code as necessary. This works fine and was never really a problem.

  • Production File Mode Startup - Error Dialogs
Running in file based mode will still produce an exception that often shows up as a WAIT WINDOW on the interactive desktop instance of the Web Connection server. While that's not super helpful, it's better than nothing and often gives you a clue as to what's happening.

  • Production COM Server Startup - No Error Info
COM servers are even worse. Because the OnInit() and OnLoad() behaviors run in the FoxPro server's Init() constructor, an error anywhere in that code simply causes the server loading to fail with what amounts to an unknown COM error. Even though Web Connection captures and tracks the error that occurred, it can't report it in any way because there's no instance to read the error information off of.

In either COM or file based mode, if you're running in production you often don't have easy access to the server, so even if there is some sort of error message displayed, you may not see it. With COM there's typically no interactive UI, so there's nothing to see.

That leaves the log files which again you have to look at on the server to debug.

Current Version Workarounds for Startup Errors - Tracing

In Web Connection 6.0 I introduced server tracing via the Server::Trace() method. There are two modes for this:

  • Automatic Server Error Logging in wcTracelog.txt
  • Trace Logging for your own code

Automatic Trace Error Log

Web Connection automatically writes to the log file if a server load error occurs. As long as your server has rights to write in the server's startup folder, wcTracelog.txt is filled with the error info, and usually that will give you a good clue as to what's going wrong. But this file lives on the server and you have to access it there - and when you first fire up your server you still end up with a nasty and unhelpful COM error.

Error tracelog output looks something like this:

07/01/2017 06:54:09 PM - Processing Error - Server Startup
     Error: 1925
   Message: Unknown member OSQL.
   Details: OSQL
            Code inside of Server OnLoad() failed
      Code: this.oSql.Execute("select * from users")
   Program: onload
   Line No: 167

Handled by: Wcdemoserver.OnError()

Explicit Trace Logging by your Code

If you know you have a problem in your startup code and you need to debug it in a production application - because 'it works on my machine', but not on the server of course - you can use the Server.Trace() method to write out state messages that show how far your code gets and echo out current values.

Tracelog output can be written like this (in server code in OnLoad() or OnInit()):

this.Trace("Entering OnInit()")

and output looks something like this:

07/01/2017 06:54:09 PM - Entering OnInit()
07/01/2017 06:54:09 PM - Completed OnInit()
07/01/2017 06:54:09 PM - Completed OnLoad()
07/01/2017 06:54:09 PM - Initialized Sql

Using tracing it's possible to debug server code, but it's not optimal of course. In order to make any changes to the log output you have to change your code and you have to recompile and redeploy your server, which can be a pain.
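A hedged sketch of what that looks like in practice - the property names and the wwSql usage below are illustrative, not required:

```foxpro
PROTECTED FUNCTION OnLoad

TRY
   this.Trace("OnLoad: setting up SQL")
   this.oSql = CREATEOBJECT("wwSql")
   this.Trace("OnLoad: SQL initialized")
CATCH TO loException
   *** Echo the failure into wcTracelog.txt so you can see it on the server
   this.Trace("OnLoad error: " + loException.Message + ;
              " at line " + TRANSFORM(loException.LineNo))
ENDTRY

ENDFUNC
```

The Trace() timestamps then show exactly how far the startup code got before things went wrong.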

Web Connection 6.15 Improvements

In Web Connection 6.15 the way Web Connection fires the OnLoad() event changes. No longer will the event be fired as part of the server's constructor/Init() code. Rather Web Connection now fires the OnLoad() as part of the first request that is directed against the server. There's now a flag in the server class that does essentially this:

*** Now do server load Configuration
IF (this.lInStartup)
   THIS.OnLoadInternal()
ENDIF

The lInStartup flag is .T. initially and is set to .F. as part of OnLoadInternal(). OnLoadInternal() is a wrapper around your OnLoad() hook method, which traps errors, and if an error occurs handles it by logging it and setting a lStartupError flag.

This flag is checked just before the request would normally be processed:

IF (this.lStartupError)
    lcHtml = this.ErrorMsg(;
        "Fatal Server Startup Error",;
        "<p>The server failed to load on startup. Please fix the following error in the server's startup code.</p>" + ;
        "<p>An error occurred during OnLoad(): " + this.cErrorMsg + "</p>")
    this.SendServerResponse(lcHtml)
ELSE
   ****************************************************************
   *** NOW CALL THE USER CODE - PROCESS IS NON FRAMEWORK ENTRYPOINT 
   ****************************************************************
   IF THIS.lDebugMode OR this.lUseErrorMethodErrorHandling 
      .Process()
   ELSE
      TRY 
         .Process()         
      *** IN theory no errors should occur here
      *** because wwProcess handles its own errors
      CATCH TO loException
         THIS.OnError(loException)
      ENDTRY
   ENDIF   
ENDIF

If an error occurs you then see an error page like this:

This error page is permanent - any subsequent requests will display this error until the startup error is fixed.

Under the old behavior the server would have never loaded - now the server loads, the error information is available, and it can be displayed on this error page so you know what to fix. Request data is still logged.

Faster COM Server Loading

Because COM servers no longer have to hit your OnLoad() code before returning an instance to COM, COM Servers can load more quickly. Instances pop up much quicker and are ready to process requests before all servers have fired their OnLoad() code. This is especially true if you have time consuming startup or initialization code in OnLoad() which is now delayed until the first hit goes against the server. This can also get around timeout issues with slow load code (which previously was capped at 10 seconds for all servers as a group).

Parallel COM Server Loading

In addition the new version now loads COM servers in parallel so servers load much quicker as code no longer has to wait for each of the servers to load in sequence.

Combined with the reduced startup overhead of not having to run the OnLoad() code, server initialization is significantly faster than before, especially for servers that have slow initialization code for setting up things like initial SQL connections or remote share file access.

Related Improvement: Dynamic Instance Configuration on Module Admin Page

Since I've been dealing with instance performance improvements, another small enhancement comes in the form of a new configuration option on the .NET Module admin page: you can now dynamically configure the instance count right on the Admin page:

This option updates the ServerCount configuration and immediately updates the pool count.

.NET Module Only

This feature works only with the .NET Module. The ISAPI DLL does not support this functionality.

Locked and Loaded

These improvements have come up in discussions on a few occasions in the past and I hope they will make life easier for you. Debugging startup errors is one of a few common issues I frequently get support calls for, so hopefully this will cut down on that end of things.

These features will be provided in Web Connection 6.15 which is scheduled for later this month.

this post created with Markdown Monster

Web Connection 6.15 is Here


It's time for another Web Connection update with version 6.15 hitting the release mark. This version adds a number of administrative feature updates related to server loading and server management.

Here are the highlights:

  • New Install-IIS-Features Powershell Script
  • COM Server Loading Changes
  • Server Count can now be Interactively set on Admin Page
  • Hotswapped EXE servers now automatically re-register for COM

New Install-IIS-Features Powershell Script

Web Connection now ships with a new Powershell script that can more easily install IIS on your local machine, either for desktop or server versions of Windows. Powershell has access to script automation features that can automate the Windows Feature/Role installation that is required to properly install IIS. The provided script installs IIS with the features that are required to run Web Connection and is much quicker than manually using either the Windows desktop Add Windows Features or the server Add Server Role GUI installations.

  • Open an Administrator Powershell Command Prompt
    PS> cd \wconnect    
    PS> .\Install-IIS-Features.ps1

This Powershell script enables the IIS Web Server role and adds all required features and once complete you can then install Web Connection or create new applications for IIS.

For more info on what this script does check out my recent blog post:

COM Server Loading Changes

Version 6.15 changes the way Web Connection servers execute their startup code. Specifically, it removes the OnLoad() functionality from the FoxPro Init() call, which previously ran both the OnInit() and OnLoad() handling.

The Init() code works fine if there is no error. But because this code ran in the FoxPro Init() constructor, any error would cause a COM error that couldn't communicate any sort of error information back to the Web Connection module or ISAPI extension. The end result was that any startup code error would fail with a very cryptic and unhelpful COM error.

In v6.15 and later OnLoad() is now delay fired as part of the first request that hits the server. Because the code is no longer part of the FoxPro class constructor, the error can be trapped by FoxPro and Web Connection can actually display an error message like this:

Figure 1 - COM Server startup failures now display an error message.

Once a startup error occurs, further operation of the server is blocked, and all subsequent requests display this cached error message until the problem is fixed. So startup errors still cause the server to fail, but you at least get a reasonable error message about what failed and a clear indication that the problem is in the OnLoad() of the server.

Server Load Performance Benefits

This change also makes server loading considerably more efficient, especially for applications that have heavy initialization code. If you need to initialize lots of objects, create database connections or map drives - all of which takes time - these operations are now removed from the immediate server load sequence, which otherwise can time out very quickly. Actual server load time is generally much quicker, and when the first request comes in the server pool is available much sooner than before.

For more info check out my recent post:

  • New Web Connection Server Startup Features in 6.15

Parallel Loading of COM Servers

When loading multiple COM servers, Web Connection now loads the server pool in parallel rather than sequentially. This also helps in reducing the time it takes to get the first request ready for processing.

This works by loading each server on their respective threads simultaneously (using the .NET Parallel Task Library) which means multiple servers are loaded at the same time on their respective Thread Pool threads. This reduces load time for larger pools significantly, especially in combination with the above mentioned new server load features.

This feature works only with the Web Connection .NET Module, not with ISAPI.

Server Count can now be interactively set on the Admin Page

We've also added a new feature to the Web Connection .NET Module admin page that now lets you set the server count interactively, using a text box on the admin page:

Figure 2 - COM Server counts can now be dynamically set on the Module Admin page.

Using this new feature you can at any time change the server count remotely with just a few key strokes.

Admin Changes require Write Access for the Web Account

Any of the admin features that change settings write changes to web.config in the Web folder so you need to ensure that the Web account has write permissions. The account that's in use is the Application Pool account and shows on the Admin page as the Server Account.

Automatic -regserver on HotSwapped COM Servers

Web Connection has long supported hotswapping of servers: you upload a new executable to a configured update file and then hotswap the server while running in COM mode or in standalone file mode. Web Connection uploads your file and hotswaps the server by first holding requests, unloading all instances, copying the new file, and then releasing the hold-requests flag. This process can be very quick depending on how many servers you need to start up, and it can be performed without any sort of server shutdown.

In v6.15 we've added the additional step of registering the hotswapped EXE using the yourServer.exe -regserver switch, to ensure that any changes to the COM server's type library due to wwServer interface changes are reflected in the registered type library. In the past, not re-registering after changes could occasionally cause COM servers to fail as the COM member sequences between the registered type library and the actual type library got out of sync.

With automatic registration this should no longer be an issue, as the server is always re-registered, even if it re-registers the exact same server interface.

Next to No Developer Impact

All these changes in 6.15 deal with a few issues related to server loading under COM that have come up in the past and have finally been addressed. None of these changes require any changes on the developer's part - they are under the hood improvements.

Possible Side Effect: No OnLoad() before First Hit

The only change that may have a noticeable effect is that OnLoad() now fires on the first incoming request rather than as part of the FoxPro class INIT() constructor. If you're using stock Web Connection servers generated by the templates or the examples, there's no issue.

If you are doing things like running your own background timer, or firing additional code off the Web Connection timer, it's possible that the code you're firing requires that the server is fully initialized, but OnLoad() may have not run yet.

If that's the case you can work around this by explicitly checking and firing the OnLoad() code yourself as part of your timer or other inline code as a precaution:

IF (this.lInStartup)
   THIS.OnLoadInternal()
ENDIF	

This ensures that OnLoadInternal(), if it has not fired yet, definitely gets fired. The lInStartup flag gets updated once OnLoadInternal() has fired, so this code runs only while the flag is set.

Locked and Loaded

These improvements have come up in discussions on a few occasions in the past and I hope they will make life easier for you. Debugging startup errors is one of a few common issues I frequently get support calls for, so hopefully this will cut down on this particular problem area and the rest of the improvements make for generally faster server start up in COM mode and more reliable error information if something goes wrong.

Enjoy.

this post created with Markdown Monster

Calling async/await .NET methods with wwDotnetBridge


I got a question on the message board a week ago regarding calling async .NET methods using wwDotnetBridge. My immediate suspicion was that this probably wouldn't be possible since async code on .NET usually uses generics and requires special setup.

However, as it turns out, you can call async methods in .NET with wwDotnetBridge. In this post I describe how async/await methods work in .NET and how you can call them from FoxPro with wwDotnetBridge.

How async/await works in .NET

The async and await pattern in .NET seems like magic - you await a function that processes asynchronously, which means the code runs in the background, and then you continue processing as if the code were running synchronously. async/await code looks and feels just like synchronous code, but behind the covers the code actually runs asynchronously.

.NET does this via some compiler magic that effectively rewrites your linear code into a state machine. That generated code essentially creates the dreaded async pyramid of doom that nobody wants to code up by hand, but hides it behind generated compiler code - you never have to look at the series of continuations.

At a lower level, .NET uses the Task or Task<T> class, which is like a .NET version of a promise. A Task is essentially a task forwarder that calls a method asynchronously, handles the callback, and provides a Result property that holds the result value (if any). There are options to check completion status as well as methods that can wait for completion. In fact, you can simply wait on .Result, which is a blocking getter that won't return until the result is available.

Task is the low-level feature - async and await are language sugar built around the Task object that essentially builds a state machine which waits for completion internally and includes checks and methods to examine the current state of the async request. Methods exist to wait for completion and to continue processing with the result (.ContinueWith(), which is what await uses behind the scenes), as well as the .Result property that blocks until the result is available.

In essence, async and await chops up linear code into nested blocks of code that continue in a linear fashion. For the developer the beauty of async/await is that the code looks and behaves mostly like linear code while running asynchronously and freeing up the calling thread.

An example of Async/Await in .NET

Let's say I want to make an HTTP call with System.Net.WebClient which has a number of async methods.

public async Task<string> MakeHttpCall(string url)
{
    var client = new WebClient();
    string http  = await client.DownloadStringTaskAsync(url);
    return http;
}

Remember the magic here - the compiler is doing a bunch of work to fix up this code. Note that in order for async/await to work, the called method has to be async to start with, which means the caller has to call it asynchronously. Async/await can be a real rabbit hole as it worms its way up the stack until it reaches a place where an async operation can be started (usually an event or a server generated action). Another way is to use Task.Run() to kick off your own Task and start an async operation sequence.

Also note the compiler magic that makes it possible for the method to declare a return type of Task<string> while the code actually returns a string. Async methods automatically wrap the result type in a task, so the string result becomes Task<string>.

To be clear, when we want to call an async method from FoxPro we can't use this same approach, but there are other ways to retrieve results from async calls in our non-async-capable, event-less FoxPro environment.

It's also possible to call async methods without using await. As seen above, an async method is really just a method that returns a Task. So WebClient.DownloadStringTaskAsync() - an async method that normally is called with async/await - can also be called like this:

public string MakeHttpCall(string url)
{
    var client = new WebClient();
    var task = client.DownloadStringTaskAsync(url); // returns immediately
    // waits until Result is available
    string http = task.Result;
    return http;
}

Here the code is directly working with the lower level Task API and it uses the .Result property to wait for completion. If .Result is not ready yet, retrieving .Result blocks and waits for completion of the async task before the value is returned.

This pretty much defeats the purpose of async since we end up waiting for the result, but keep in mind that you have the option of running other code between starting the Task and retrieving the Result property.

This code looks like something that we can call with wwDotnetBridge.

Calling an Async method with wwDotnetBridge

And as it turns out we can in fact call DownloadStringTaskAsync() with FoxPro code just like this:

do wwDotNetBridge
LOCAL loBridge as wwDotNetBridge
loBridge = CreateObject("wwDotNetBridge","V4")

loClient = loBridge.CreateInstance("System.Net.WebClient")

*** execute and returns immediately
loTask = loBridge.InvokeMethod(loClient,"DownloadStringTaskAsync","https://west-wind.com")
? loTask  && object

*** Waits for completion
lcHtml = loBridge.GetProperty(loTask,"Result")
? lcHtml

And this works just fine.

Note that you have to call the async method indirectly with InvokeMethod() and retrieve the result value from Task<T>.Result using GetProperty(). This is required because both the method and the result property use .NET generics, which can't be called directly through COM interop and require wwDotnetBridge's indirect processing. But it works! Yippee!
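Since the returned Task is just a .NET object, you also don't have to block on Result right away - you can poll for completion and do other work in the meantime. This is a hedged sketch: it assumes the standard Task.IsCompleted property and the same loBridge and loClient setup as the example above.

```foxpro
*** Sketch: poll the Task instead of blocking on Result
loTask = loBridge.InvokeMethod(loClient,"DownloadStringTaskAsync","https://west-wind.com")

DO WHILE !loBridge.GetProperty(loTask,"IsCompleted")
   *** other work can run here while the download proceeds
   DOEVENTS
ENDDO

*** Result is available now, so this no longer blocks
lcHtml = loBridge.GetProperty(loTask,"Result")
```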

wwDotnetBridge - More than you think!

I was pretty convinced that this wasn't going to work, but in hindsight it makes perfect sense that it does. Async methods are just methods that return a Task object, which can be accessed and manipulated like any other object in .NET and therefore with wwDotnetBridge. The main consideration for wwDotnetBridge is that Task<T> is a generic type and requires indirect access: InvokeMethod() to call the async method, and GetProperty() to retrieve the Result property.

Be careful

All that said, I'm not sure it's a great idea to actually do this. Async methods run in the background, potentially on background threads, and Microsoft strongly recommends against using .Result to wait for completion. They are of the "don't call us, we call you!" persuasion: use async/await, or Task continuations (i.e. .ContinueWith()), which is something we can't do directly with wwDotnetBridge (we can't create delegates).

However, if you are running inside of FoxPro over COM (as we are with wwDotnetBridge), there's already thread marshalling happening that should prevent thread conflicts from manifesting with async code. Running a few tests firing off 10 simultaneous requests and collecting them seems to work reliably even for long runs. Still, make sure you test to verify you don't run into thread lock-up or corruption. Check, test and be vigilant if you go down this path.

So there you have it: Task Async methods with wwDotnetBridge are possible. More APIs to connect with for FoxPro. Rock on!

this post created and published with Markdown Monster

West Wind Web Connection 6.17 released


Time for another West Wind Web Connection update with version 6.17. This release is mostly a maintenance release that fixes a few small bugs and updates a few existing features with minor enhancements.

Better Error information for wwDotnetBridge Load Errors

wwDotnetBridge now will report error information when it fails to load. Previously this error information wasn't forwarded from C++ loader code due to some Unicode conversion issues.

This should make it much easier to debug wwDotnetBridge loader errors. The most common errors are related to missing assemblies or blocked/access-denied files, and those should be much clearer now. A small feature, but probably very useful to those getting started and running into problems.

Browse Site link on the Web Connection Server Form

The Web Connection Server window now has a Browse Site button that will open the system browser to the configured HomeUrl configuration value.

The value is set in new projects to the configured local development Web root. But you can customize this value using the HomeUrl value in MyApp.ini:

HomeUrl=http://localhost/MyApp/

You can also temporarily set this URL to a specific page that you are working on so you can quickly jump to that page from within FoxPro.

The form is now also properly resizable with the request links properly adjusting in the requests list.

JsonSerialize() and JsonDeserialize() Global Functions

As you probably know, Web Connection includes the wwJsonSerializer class, which makes it pretty easy to serialize and deserialize data.

However, you still need to create an instance of an object, set a couple of options and then call the serialize method. These two new functions make it even easier to do JSON serialization with a single function call, so you can more easily serialize data for debugging or for embedding into scripts and templates.

Here's what the new code looks like:

DO wwJsonSerializer && load libs

lcCustomerJson = JsonSerialize(loCustomer)
loClonedCustomer = JsonDeserialize(lcCustomerJson)

Assert(loClonedCustomer.LastName == loCustomer.LastName)

New Process Class Template Updates

When Web Connection creates a new project or a new process class, it uses a stock template (Templates\process.prg) from which to create the new process class.

In this update the class now defaults to nPageScriptMode=3, which executes MVC scripts via Response.ExpandScript() when no matching process/controller method can be found. Scripts are the preferred mechanism for building MVC style applications. Previously the default mode was 1, which used templates. The problem was that people would expect the full feature set of scripts, which would then not work. Switching the default should cause less confusion for new users.

Of course if you prefer to have 'headless' views fire templates (Response.ExpandTemplate()) you can just manually set the value back to 1.

Build Script Updates

The build.bat file generated into a new project folder to create a packaged application ready for deployment has been updated to create a more complete deployment package.

Previously the script only pulled the EXE and all the Web Connection system dlls required to deploy the binary folder.

The updated script adds the Web and Data folders by default and creates the same structure as the project with Deploy/Web/Data top level folders.

This should make it easier to deploy applications for the first time, which is the primary purpose of the build script. Subsequent updates are better handled with Web Deploy (in Visual Studio or with MSBuild) for Web content and the bld_MyApp.prg application updater for your application's binary.

It can also be useful when updating Web Connection versions as the build script pulls all the dependent Web Connection DLLs from the install folder into the Deploy folder. For updates you might want to modify the script by removing the Web and Data folder creation.

This is another little gem in the feature set - while it's easy enough to create a batch or Powershell script yourself, having something out of the box to get you started is a big motivator to have an automated way to package your app. If you haven't done it before give it a try.

If you have an old application, you can also check this out by creating a new project and picking up the generated build.bat and then copying it into your older project. Most of the generated script is boiler plate with a couple of variables at the top to point at the right paths. You can modify the script to fit your needs easily once you know what you should be copying.

Improved COM Server Load Times

COM Servers now load slightly faster as the COM load sequence has been optimized further after the last release's parallelization of server loading onto pool threads. Servers are now loaded onto MTA threads (rather than STA), which provides faster startup and less overhead in ASP.NET request processing.

The switch from STA to MTA required a couple of small changes, but the benefit is that this removes any requirement for STA threads in your ASP.NET Web application, which reduces resource usage and improves scalability under heavy load.

Summary

All in all, version 6.17 is a small update with only very minor fixes. There are a handful of other small tweaks and performance improvements not worthy of much explanation here. This is a good thing - we don't need major updates and features in every release, and this one has been pushed out to address a few small bugs that needed addressing.

Enjoy the calm of this release...


this post created and published with Markdown Monster

Shutting down file-based Web Connection Instances with WM_CLOSE Messages


Recently we had a long discussion about killing a specific file-based instance in a West Wind Web Connection application. Not just killing it, actually, but issuing a 'controlled' shutdown of the instance. The scenario in this post: a customer has an application that is leaking memory and needs to detect the memory leakage in a particular instance, shut that instance down, and start a new one.

One issue that came up as part of this thread is the idea that file-based instances cannot be shut down externally...

File Based Shutdowns

When you run West Wind Web Connection in file-based mode, Web Connection runs as a standalone FoxPro forms application with a READ EVENTS loop. The form that pops up when you run the server is the UI that holds the server in place, and the READ EVENTS loop is essentially what keeps the application alive. When the READ EVENTS loop ends - when you close the form - so does the application.

Generally speaking, Web Connection file-based applications can't be killed externally, short of explicitly running a Windows TASKKILL operation or explicitly exiting the form.

If you've ever tried to shut down a running Web Connection application from the Windows Task bar with the Close command you know that that doesn't work as expected. You get the following message from the FoxPro window or your application if it's running outside of the IDE:

which is pretty annoying.

Fixing the Shutdown Issue

There's a pretty easy workaround for this issue. As stated above, the problem is that Web Connection is sitting inside of a READ EVENTS loop, and that's what forces the application to stay up when the Windows close command is sent to the FoxPro application window.

What's needed to fix this is to intercept the WM_CLOSE message that Windows sends to shut down the application, and explicitly force the application to release its READ EVENTS loop with CLEAR EVENTS.

FoxPro supports hooking into Windows messages via the BINDEVENT() function, and doing all of this takes just a few lines of code.

To start I added a ShutDown() method to the wwServer base class. This will become part of Web Connection but if you want to implement this now you can just add the ShutDown() method to your wwServer subclass.

************************************************************************
*  Shutdown
****************************************
***  Function: Used to shut down a FILE BASED application
***    Assume: Has no effect on COM based applications
***            as COM servers can only be closed externally
***            by the COM reference
***    Params: Accepts Windows SendMessage parameter signature
***            parameters aren't used but can be if necessary
************************************************************************
FUNCTION Shutdown(hWnd, Msg, wParam, lParam)

IF !THIS.lComObject
    CLEAR EVENTS
    QUIT
ENDIF

ENDFUNC
*   Shutdown

Note that there's a check for lComObject in this method. This whole Windows message and remote shutdown mechanism only works with file-based operation. In COM mode it's impossible to shut down instances remotely short of TASKKILL, as the COM reference from the host controls the application's lifetime.

In file based however we can respond to WM_CLOSE events and then call the wwServer::Shutdown() method which effectively clears the event loop and then explicitly quits.

Next we need to hook up the WM_CLOSE message handling using BINDEVENT().

In its simplest form you can do this in YourAppMain.prg: in the startup code at the very top of the PRG, wrap the BINDEVENT()/UNBINDEVENTS() calls around the READ EVENTS call like this:

WM_CLOSE = 0x0010  && in wconnect.h or Foxpro.h
BINDEVENT(Application.hwnd,WM_CLOSE,goWCServer,"ShutDown")

READ EVENTS

UNBINDEVENTS(goWCServer)

To take this a step further I added the code directly into the Web Connection wwServer class.

The first is at the very bottom of the wwServer::Init() method:

IF !THIS.lComObject AND _VFP.Visible
   BINDEVENT(_VFP.hWnd, WM_CLOSE, THIS, "ShutDown")
ENDIF  

and also in the wwServer::Dispose() method:

IF !THIS.lComObject AND _VFP.Visible
	TRY
		UNBINDEVENTS(THIS)
	CATCH
	* this may fail when shutting down an EXE but not in the IDE
	* we don't care about the failure as this should be a shutdown operation
	ENDTRY
ENDIF

The latter code only fires when the form is shut down normally using the exit button - if the BINDEVENT() handler actually fires, the app is immediately shut down.

Testing Operation

To test this out, one of the easiest ways is to start your Web Connection application in file mode from within Visual FoxPro's IDE and then use the Taskbar icon and Close Window from there.

This is what generated the Can't quit Visual FoxPro message before, but now with the BINDEVENT() code in place the Web Connection server will actually shut down. Yay!

If you want to do this programmatically, a very simple way is to use .NET code and LinqPad, which you can think of as the FoxPro Command Window for .NET. There you can easily iterate over all running processes, check memory usage and more.

void Main()
{
    // var proc = Process.GetProcesses().FirstOrDefault(p => p.MainWindowTitle.Contains("Web Connection"));
    foreach (var proc in Process.GetProcesses())
    {
        if (proc.MainWindowTitle.Contains("Web Connection"))
        {
        	proc.Dump(); // show object info (below)
        	// optionally gate on memory use as well: proc.PrivateMemorySize64 > 20000000
        	proc.CloseMainWindow();
        }
    }
}

This makes it very easy to create a tool that can remotely look for Web Connection instances that have too much memory and attempt to shut them down.

Because this is just simple .NET Code you can also run something similar using FoxPro code using wwDotnetBridge:

do wwDotNetBridge
loBridge = GetwwDotnetBridge()

*** Returns a ComArray instance
loProcesses = loBridge.Invokestaticmethod("System.Diagnostics.Process","GetProcesses")

*** Note most .NET Arrays are 0 based!
FOR lnX = 0 TO loProcesses.Count -1
   *** Access raw COM Interop objects
   loProcess = loProcesses.Item(lnX)
   lnMemory = loProcess.PrivateMemorySize   && could gate the shutdown on this value
   
   IF ATC("Web Connection",loProcess.MainWindowTitle) > 0
       loProcess.CloseMainWindow()
   ENDIF
ENDFOR

CloseMainWindow() is the same as using Close Window - a soft shutdown that closes the application somewhat orderly. For this to work you need to be running as the same user as the window you're trying to shut down, or as an Admin/SYSTEM account that can access any account's desktop.

If CloseMainWindow() is not enough you can also call the Kill() method which is a hard TASKKILL operation that immediately shuts down the application.

It's important to understand that either of these operations causes an out-of-band event in FoxPro, meaning it will interrupt executing code between commands. IOW, there's no guarantee that the application will shut down only after, say, a Web Connection request has finished. To do that, more logic is needed to set a flag that can trigger a shutdown at the end of a request.
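One way to implement such a flag is sketched below. This is not stock Web Connection code - lInRequest and lShutdownPending are hypothetical properties you would add to your wwServer subclass and check when each request completes:

```foxpro
*** Hypothetical sketch: defer shutdown until the current request finishes
FUNCTION Shutdown(hWnd, Msg, wParam, lParam)
IF !THIS.lComObject
   IF THIS.lInRequest               && .T. while a request is processing
      THIS.lShutdownPending = .T.   && checked at the end of each request
   ELSE
      CLEAR EVENTS
      QUIT
   ENDIF
ENDIF
ENDFUNC
```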

More Caveats - Top Level Forms don't receive WM_CLOSE

The above code patch fixes the Can't quit Visual FoxPro message, which is useful. I can't tell you how often I've cursed this during development or when shutting down Windows.

But this approach has limitations. If you're running a FoxPro application without an active desktop window, the WM_CLOSE message is never properly sent to either the _VFP desktop or the active FoxPro top level form. FoxPro internally captures the WM_CLOSE event and shuts the application down before your code can interact with it.

For Web Connection this means that when you're running with Showdesktopform=On (which runs a FoxPro top level form and hides the desktop), **the application quits without any sort of shutdown notification**. This is a problem because the application quits without cleaning up. In my experience this kills the application, but the EXE doesn't completely unload and leaves behind a hidden running EXE you can see in Task Manager.

For this reason 'closing' the window is not a good idea - you have to Kill() the application to get it truly removed.

What about COM Objects?

COM objects and file based servers are managed completely differently. COM Servers are instantiated as COM objects - they don't have a startup program with a READ EVENTS loop, and there's no way to 'Close' a COM server. You can't even call QUIT from within a COM server to kill it. QUIT has no effect inside of a COM server.

So how do you kill a COM Server:

  • Properly release the reference
  • Use TASKKILL

To properly release a reference to a Web Connection server, use the Administration links. You can find these links on the Admin page, and you can also fire those requests using an HTTP client like wwHttp directly from your code.

The easiest way is to look at the links on the Admin page for COM server management:

Web Connection 6.18 introduces the ability to shut down a specific server by process ID in COM mode. This works via the COM instance manager, which looks for the specific instance in the pool, waits to shut it down, and then starts a new instance to 'replenish' the pool.

However, realistically it's best to reload the entire pool. In Web Connection 6.17 we made major improvements in COM server load times: instances are loaded in parallel, and server loading is split into instance loading plus a load sequence that fires on the first request. This makes it much faster to bring up new servers, and a pool reload is actually only as slow as the slowest server instance restart. So - don't be afraid to restart the entire pool of instances via the ReleaseComServers.wc link.
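You can also script a pool reload from FoxPro by hitting that admin link with an HTTP client. The sketch below uses wwHttp's HttpGet() method - the URL and credentials are placeholders for your own site, and assume your admin page accepts the authentication you provide:

```foxpro
*** Sketch: recycle the COM server pool via the admin link
DO wwHttp
loHttp = CREATEOBJECT("wwHttp")
loHttp.cUsername = "admin"          && placeholder credentials
loHttp.cPassword = "SuperSecret"
lcResult = loHttp.HttpGet("http://localhost/myapp/ReleaseComServers.wc")
? lcResult
```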

As I often point out - if you're running file based in production with Web Connection, you're missing out on many cool management features that only work in COM, like pool management, auto-recovery on crashes and now the ability to reload individual instances explicitly.

Summary

There are all sorts of possibilities for managing your Web Connection instances in FoxPro, and I've shown a nice workaround here that gets around the annoying issue of shutting down file-based instances in development mode. It doesn't completely solve file-based shutdown in production scenarios, but it does offer a few more options that at least let you be notified of requested shutdown operations.

West Wind Web Connection 6.18 released


Time to ring in the new year with another West Wind Web Connection update: version 6.18. This release is mostly a maintenance release that fixes a few small bugs and updates a few existing features with minor enhancements.

Here's some more detail on what's new.

Apache Support is back

I removed Apache support in Web Connection 6.15 and 6.17 due to difficulties in getting Web Connection to run on the latest version of Apache: supporting it would have required rewriting the Apache module for Apache 2.4. I deemed this too much of a hassle, as the Apache user base is quite small and there had been very little demand for this platform in the first place.

Well, it turns out that user base is very vocal and hardcore about their use of Apache, which surprised me. Even so, I was not inclined to update Apache support until one of our customers offered to sponsor a good chunk of the development time to bring it to Apache 2.4. Thanks to this sponsorship, I'm happy to announce that Apache support is back in Web Connection 6.18!

There are also a number of additional enhancements in the Apache support that improve on previous versions.

Specifically, the PHYSICAL_PATH and MD_APPL_PATH variables now provide the physical path of the script file and the application's physical base path respectively. The latter is a concept not directly supported by Apache, which only exposes the root site path and has no support for virtual paths. However, based on some conventions (the bin/wc.dll location) it was possible to provide these values.

Make sure you run wc.dll out of the bin folder in Apache to ensure these values work as expected.

These enhancements make Apache behave a lot more like IIS. In fact, it's now possible to run Apache without the custom wwApacheRequest class, although I still recommend you use it as it provides a number of fallbacks if stock paths are not found.

Apache Configuration

The Apache configuration for new projects has also been reworked with simpler, newer configuration options that provide a more manageable virtual folder and script mapping setup.

JsonString() and JsonDate() Functions in wwUtils

Web Connection includes the wwJsonSerializer class for properly serializing any kind of value or object to JSON. While that class works fine, there are a few simple conversions that often need to be performed with JSON strings and dates, and for these there are now simple JsonString() and JsonDate() helpers that let you convert values with a one-liner.

These functions are in addition to the JsonSerialize() and JsonDeserialize() functions, also in wwUtils, which were introduced in Web Connection 6.17.
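Here's what using the new helpers might look like - a sketch assuming the functions take a plain FoxPro value and return the corresponding JSON literal as a string:

```foxpro
DO wwutils   && load the helper library

*** Assumed signatures: value in, JSON literal out
lcName = JsonString([O'Hara said "Hi"])   && quoted and escaped JSON string
lcDate = JsonDate(DATETIME())             && ISO-8601 JSON date literal
```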

Unload and Reload Individual COM Servers

The .NET Handler now includes the ability to unload a specific COM server based on its process ID. When shut down, the server is released in an orderly COM release cycle, followed by a Kill cycle if the clean shutdown fails. A new instance is then automatically restarted and added back to the pool.

This is very useful for administrative tooling that monitors the health of Web Connection processes and, when detecting an abnormality like excessive memory use, shuts down a specific instance via an HTTP request.

Close FoxPro when Web Connection is Running in File Mode

When Web Connection is running inside of Visual FoxPro as a development server, you can't use the window close button on the FoxPro window, or a Taskbar close click, to shut down FoxPro. This is because Web Connection sits in a READ EVENTS loop which prevents FoxPro from shutting down.

This update includes logic that uses BINDEVENT() to hook the Windows WM_CLOSE message; when it detects this message, it releases the event loop and explicitly shuts down.

This is also useful for admin scenarios similar to unloading an individual COM server, except that file servers are more difficult to manage. This fix allows an external application to gracefully shut down a running Web Connection instance. For more info see my recent blog post Shutting down file-based Web Connection Instances with WM_CLOSE Messages.

Summary

There are a few additional small bug fixes but overall this release is a very small one with the exception of the Apache support. Easy does it with this release... it's all that's needed for the moment.

Locking down the West Wind Web Connection Admin Page


The West Wind Web Connection Administration page is a powerful page that is the gateway to administration of the West Wind Web Connection server that is executing. But you know the saying:

With great power, comes great responsibility!

And that's most definitely true for the Admin page.

The Admin page has a very important role, but it's crucially important that this page is completely locked down and not accessible by non-authenticated users.

You don't ever want to come to an old Web Connection Administration page on a live Web site and have it look like this:

If you see this or something similar on a live site, the administration page is wide open for anybody to access and that's a big problem.

The page above comes from an actual site on the Internet and it makes me sad to see this, because you have to go out of your way to make this happen and willfully disable security. Unfortunately, this is not uncommon to see.

I was contacted over the weekend by a security researcher, Ken Pyle of DFDR Consulting LLC, who notified me that he's run into a lot of sites with this problem and he provided a number of links.

And he's not wrong!

To be clear: pages like this show a very obvious message that tells you what the problem is, namely that the page is not secured. So a lot of this is shoot-yourself-in-the-foot syndrome, where somebody has willfully ignored the message or, worse, removed the security that gets put in place when you use the Web Connection tooling to configure a Web site.

If you install Web Connection properly, either following the manual configuration guide or using the automated tools (yourApp_ConfigureServer.prg, or the Console Web Site Configuration Wizard in older versions), the installation will be locked down by removing anonymous access for the IUSR_ and Users accounts, so that at least a login of some sort is required to get to the admin page.

Changes in 6.19's Admin Page

In Web Connection v6.19 we've made some additional simple changes to the Admin page that make it much harder to accidentally expose the admin interface on a public Web site.

Two changes in particular:

  • Links and Content no longer displayed on unauthenticated remote requests
  • Removed the Process List Viewer display

If you access the page from any non-localhost computer and you are not authenticated, you will now see:

If you do authenticate and get in, the Process list shown in the previous screen shot is no longer available. Most of that functionality has been available in the Module Administration Page, which is more specifically focused on the running application server instances.

If you can't use or upgrade to Web Connection 6.19, you can download the updated Admin.aspx and old Admin.asp pages from this zip file:

Lock it down!

Regardless of this 'safe by default fix', it's extremely important that you lock down this page by explicitly removing access rights for non-authenticated users.

In this post I'll show you how to do this as a refresher, but I also recommend you look at the documentation for Securing your Web Connection Installation.

What about Web Connection Admin Links

The Admin Page is really mostly a list of links that point at Web Connection Server operations to manage the server lifetime. These links are obviously also security sensitive.

But Web Connection administration requests like ReleaseComServers.wc or wc.wc?__maintain~ReleaseComServers are already locked down by default via the AdminAccount configuration setting in web.config or wc.ini, which by default is set to ANY. That means any authenticated user is accepted, and non-authenticated users are refused access. So these links are locked down by default, although - just like the Admin page - they can be unset and opened up. Don't do it - don't be the shoot-yourself-in-the-foot guy who unsets the setting and forgets to put it back. Always leave at least the base security in place.
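For reference, the default lockdown described above looks roughly like this in wc.ini; the key name matches the AdminAccount setting mentioned in the text, though the exact section placement may differ by version:

```
; wc.ini - ANY requires some authenticated user for admin requests;
; never blank this value out on a public site
AdminAccount=ANY
```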

Automated Security Configuration

The biggest cause of this security issue is that IIS and Windows security aren't set up properly on the servers in question. If you use the Web Connection configuration tooling, it automatically does the right thing and always has.

Web Connection provides tools to help you with site configuration, and these tools do the right thing for security configuration by default. We highly recommend you configure your site using the automated tools provided for this purpose which are:

Using the server configuration script is the recommended way to do this and you can customize this script with any additional configuration your application may need. By default the configuration script is compiled into your Web Connection server EXE and can be accessed with the following from a Windows Admin Prompt:

YourApp.exe CONFIG

For more information:

Remove IUSR Access

While the new Admin page fixes the basic issue of allowing access to the Admin page, it's still important to revoke access to the entire Admin folder for all unauthenticated users.

The easiest way to do this is to remove or Deny access to the IUSR account for the Admin folder in Windows:

Doing this alone will prevent access, but this is an explicit step. The new Admin page addresses the issue if you forget to set security, but it's still strongly recommended you remove IUSR!
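One way to script the IUSR lockdown, assuming a typical install path (adjust the folder to your site; run from an elevated command prompt):

```
:: Deny the anonymous IUSR account all access to the Admin folder
icacls "C:\inetpub\wwwroot\MyApp\Admin" /deny IUSR:(OI)(CI)F
```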

Manually Updating Admin.aspx

The 6.19 update to the Admin.aspx does two things:

  • Doesn't allow Remote Access that is unauthenticated
  • Removes the Process Listing Table

Let's do these steps manually.

Disallow Unauthenticated Remote Access

You can replace the section that shows the warning dialog in Admin.aspx with the following updated code that adds an additional remote site check and ends the response if the local and remote ips don't line up.

Here's the relevant code:

<%  
  string user = Request.ServerVariables["AUTH_USER"];
  string remoteIp = Request.ServerVariables["REMOTE_ADDR"];
  string localIp = Request.ServerVariables["LOCAL_ADDR"];           
  if (string.IsNullOrEmpty(user))
  { 
%><div class="alert alert-warning"><i class="fa fa-exclamation-triangle" style="font-size: 1.1em; color: firebrick;"></i><b>Security Warning:</b> You are accessing this request unauthenticated!<div style="border-top: solid 1px silver; padding-top: 5px; margin-top: 5px; "><p>
            You should enable authentication and remove anonymous access to this page or folder.<small><a href="https://west-wind.com/webconnection/docs/_00716R7OG.HTM">more info...</a></small></p><% if(localIp != remoteIp)  { %><p style="color:red; font-weight: bold">
            You are not allowed to access this page without authentication from a remote address.
            Aborting page display...</p><% } else { %><p style="color:red; font-weight: bold">
            NOTE: You are allowed to access this page, because you are accessing it from the
            local machine, but it won't work from a remote machine.</p>   <% } %></div></div><% 
    if(localIp != remoteIp)
    {
        Response.End();
    }            
} 
%>

Remove the Process List Table

The Machine Process List is a relic of earlier versions of Web Connection, when the management features were less fleshed out. Today's Web Connection can perform these tasks much more cleanly using the Module Administration page.

Remove the Process List table and edit form from Admin.aspx (and in a similar way in Admin.asp).

Remove the following:

<div class="well well-sm"><form action='' method='POST'>
            Exe file starts with: <input type='text' id='exeprefix' name='exeprefix' value='<%= this.Show %>' class="input-sm" /><button type='submit' class="btn btn-default btn-sm"><i class="fa fa-refresh"></i>
                Refresh</button></form></div><table class="table table-condensed table-responsive table-striped" ><tr><th>Process Id</th><th>Process Name</th><th>Working Set</th><th>Action</th></tr><%      
            System.Diagnostics.Process[] processes = this.GetProcesses();
            foreach (System.Diagnostics.Process process in processes)
            {
        %><tr><td><%= process.Id%></td><td><%= process.ProcessName%></td><td><%= (process.WorkingSet / 1000000.00).ToString("n1") %> mb</td><td><a href="admin.aspx?ProcessId=<%= process.Id %>" class="hoverbutton"><i class="fa fa-remove" style="color: firebrick;"></i> 
                Kill</td></tr><%
}
        %>            </table>

Also remove the block of script code at the bottom of the Admin.aspx page, which served as a helper for the process list table above.

Again you can find the latest versions of these files in Web Connection 6.19 or you can download the updated Admin pages:

Update to the Latest Version of Web Connection

If you're already on Web Connection 6.0, I highly recommend you update to version 6.19 or later and copy the Admin.aspx page from the \templates\ProjectTemplate\Web\Admin folder into your Web application(s).

I can't overstate this: even if you have an application that's been running for a long time, it's a good idea to keep up with versions in order to take advantage of security updates and bug fixes. There are many feature improvements in newer versions, and being current also means it's much easier to update to later versions. Web Connection's core engine hasn't drastically changed since Version 5.0 more than 10 years ago, so updates are almost always drop-in replacements - there are only a handful of documented breaking changes.

I realize there are a lot of very old applications out there (I ran into several 3.x applications that are 20+ years old by now), but if you have old applications running you need to be pro-active and make sure that they are still doing what they should and that they are secure. Making that jump to the current version in one step is probably unrealistic, but if you're running a recent version of WWWC 5, updating to 6.x is relatively minor. Moving from 4 to 6 is a little more involved, but can still be accomplished relatively easily with a little effort. If you decide to upgrade Web Connection from a version prior to v6.0, here is a little incentive with a 10% discount coupon:

Being on a recent version makes it much easier to keep up with changes and fixes, and you can use the change log to see what's updated and what's been fixed, with important and breaking changes highlighted.

Nevertheless, I'll discuss the fix below, so if you're using a pre-6.x version of Web Connection you can manually update your Admin.aspx and the pre-6.0 Admin.asp pages.

Resources

I want to also thank Ken Pyle for bringing this issue to my renewed attention and providing the motivation for updating the default implementation to reject unauthenticated access from remote sources by default.

Ken Pyle
DFDR Consulting LLC
Digital Forensics, Incident Response, Cyber Security
www.dfdrconsulting.com

Links

this post created and published with Markdown Monster

Web Connection and TLS 1.2 Support


On February 28th, 2018 Authorize.NET discontinued support for all TLS 1.0 and 1.1 access to their APIs and only supports TLS 1.2. Not surprisingly the last few days my phone (Skype actually) has been ringing off the hook with customers frantically looking to fix a variety of TLS 1.2 issues.

TLS is the successor to the SSL protocol. TLS 1.0 and 1.1 both have theoretically possible (but very compute-intensive) breaches associated with them, so they are considered compromised, and most industry groups that rely on secure standards (like credit card processors, obviously) now require TLS 1.2. This requirement was announced by many companies years ago, but the cut-off dates are now arriving, and Authorize.NET is just one provider whose cut-off directly affected many of the customers I work with.

Protocol Support in Windows and what it means

Before we look at what supports which version, it's important to understand what 'TLS support' means in this context. The context here really is Windows support, meaning Windows' internal libraries and SDKs that use the underlying Windows security protocol infrastructure.

This affects most Microsoft products that run on Windows, as well as the Windows client stack including WinInet and WinHttp, which incidentally is also used by the desktop .NET framework, and which Web Connection and the West Wind Client Tools use for their HTTP support.

Non-Microsoft Browsers bypass Windows Security Protocol Libraries

Although TLS 1.2 may not be supported by some older versions of Windows and their APIs, a number of applications ship their own TLS implementations. Specifically, non-Microsoft Web browsers like Chrome and Firefox do not use the Windows infrastructure, so even on old versions of Windows you can get TLS 1.2 support using these browsers. This works for browsers, but many other applications that run on old Windows versions do rely on the Windows infrastructure, and those may stop working if they require TLS 1.2.

Windows and TLS 1.2

TLS 1.2 is relatively new and old versions of Windows didn't have native support for TLS 1.2. The first versions of Windows that natively support TLS 1.2 are:

  • Windows 8.1 or later
  • Windows Server 2012 R2 or later

Windows Versions that can work with TLS 1.2 with some tweaks and patches:

  • Windows 8
  • Windows 7
  • Windows Server 2012, 2008 R2

These versions can support TLS 1.2 via registry settings (for machine wide settings) or Internet Explorer Configuration of Protocols for IE and WinHttp/WinInet functionality.

Older versions of Windows don't support TLS 1.2 at all and have no way to make it work, outside of using applications that don't use the Windows protocol stack:

  • Windows Vista
  • Windows Xp
  • Windows Server 2003
  • Windows Server 2008

None of these versions support TLS 1.2. This also affects the wwHttp class, which runs through WinInet in Web Connection and the Client Tools.

Checking TLS 1.2 support

If you're running a version that either supports or can be upgraded to support TLS 1.2, you can check TLS 1.2 support in Windows with Internet Explorer. If you know of a site that requires TLS 1.2, try to visit it: if TLS 1.2 support is not working, you will not be able to access the site/page with anything that uses WinHttp/WinInet, which includes the West Wind FoxPro HTTP tools.

In essence you can use IE as your canary in a coal mine to see if you'll have problems and need to fix TLS 1.2 settings.

One such TLS 1.2 test link is:

https://tls1test.salesforce.com/s/

If TLS 1.2 is working, the page returns a green page with a success message.

You can also hit this page with an HTTP client. Using wwHttp in Web Connection or the Client Tools you can do:

DO wwHttp   
o = CREATEOBJECT("wwHttp")
? o.HttpGet("https://tls1test.salesforce.com/s/")
? o.cErrorMsg    && Connection could not be established if TLS 1.2 is not available

You can check the HTML output in the page for TLS 1.0 Deactivation Test Passed in the output to confirm that the page worked.

If the test fails you have some work to do.
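If you have Python handy, its OpenSSL-based ssl module offers another quick check; note that this bypasses the Windows protocol stack entirely, so it reports what the server negotiates with OpenSSL and does not validate your machine's WinInet/WinHttp configuration.

```python
# Report the TLS version negotiated with a server using Python's ssl module.
# This bypasses Windows SChannel, so use it to probe the server side, not
# your local WinInet/WinHttp configuration.
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()   # e.g. 'TLSv1.2' or 'TLSv1.3'
```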

Making TLS 1.2 work in older Windows Machines

There are two places where TLS 1.2 settings can be made:

  • Internet Explorer Settings (affects IE and WinInet/WinHttp)
  • Registry Protocols Section

Internet Explorer Settings

The first and easiest way to fix settings for client libraries that use WinInet or WinHttp, like wwHttp, is by setting the protocols in the Internet Explorer options.

In IE open Internet Settings -> Advanced -> Security and scroll to the bottom of the list. There you can find the protocol support you want to allow.

This enables only TLS 1.2 support and disables all the other protocols which is the recommended approach. However, be careful with this - you may run into problems with older Web Sites that may not yet have updated to TLS 1.2 certificates. These days all new certificates issued are TLS 1.2 but some sites may still have older or client signed certificates that are not and that can break things.

Note: PCI compliance requires that TLS 1.1 and 1.0 are to be turned off!

Protocol Registry Settings

There is also a set of Registry settings that let you enable and disable the various protocols for client and server explicitly.

You can find these in:

HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols

There are sub-keys for each protocol, with Client and Server sub-keys below them. Client affects client libraries like Internet Explorer, WinInet and WinHTTP, and by extension .NET and Web Connection. Server affects incoming server connections, so it affects server software like SQL Server, Exchange, IIS etc.

The values are:

DisabledByDefault: 0 or 1
Enabled: 0 or ffffffff

and you can apply them on any of the Client and Server protocol keys.
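Put together, a minimal .reg file that enables TLS 1.2 for client software looks like this; on older Windows versions the TLS 1.2 keys don't exist by default and have to be created:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:ffffffff
```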

Be Careful Removing Protocol Support

Removing older protocols may seem secure, but it may bite you in unexpected ways. One customer I worked with removed all non-TLS 1.2 protocols and was then unable to connect to SQL Server until a post-SP patch was installed, because SQL Server's client connectivity wasn't able to connect via TLS 1.2. Get it working first, then dial back and test! Going all TLS 1.2 may require updating many old components to their latest versions to ensure connectivity still works.

Keep Things Up To Date

I've been harping on keeping software up to date recently, and this is yet another reason why it's important not to neglect upgrades or keep running ancient OS versions. This goes for operating systems as well as critical server and client software. Keeping patched is extremely important, and staying somewhat current with OS versions (no more than 2 OS releases behind is my rule) is critical to avoid being caught by surprise when something critical is simply not available.

The same is true for server software - like Web Connection. We have a lot of customers who never went past version 4, which is now nearly 20 years old. That's a long time to run an application without anything breaking.

This TLS 1.2 issue may be one of those things as really old OSs don't have an easy work around.

Be diligent with your applications. This doesn't mean being on the upgrade treadmill constantly, but have a lifetime management plan for your server hardware and software, and stick to it when the time is up. I see 4-5 years as the sweet spot for upgrading OS versions.

These days this is even easier with cloud based VM and service solutions that are much quicker to deploy and restore to.

Summary

There you have it - I thought I'd write down these notes since this issue has been a recurring theme that's affecting quite a few customers. Hopefully you find something useful here to help you make sure your applications are TLS 1.2 capable.

Resources


Creating an ASP.NET Core Markdown TagHelper and Parser


A few months ago I wrote about creating a literal Markdown Control for WebForms, where I described a simple content control that takes the content from within a tag and parses the embedded Markdown and then produces HTML output in its stead. I created a WebForms control mainly for selfish reasons, because I have tons of semi-static content on my content sites that still live in classic ASP.NET ASPX pages.

Since I wrote that article I've gotten a lot of requests to write about an ASP.NET Core version for something similar and - back to my own selfishness - I'm also starting to deploy a few content heavy sites that have mostly static html content that would be well served by Markdown using ASP.NET Core and Razor Pages. So it's time to build an ASP.NET Core version by creating a <markdown> TagHelper.

There are already a number of implementations available, but I'm a big fan of the MarkDig Markdown Parser, so I set out to create an ASP.NET Core Tag Helper that provides the same functionality as the WebForms control I previously created.

Using the TagHelper you can render Markdown like this inside of a Razor Page:

<markdown>
    #### This is Markdown text inside of a Markdown block

    * Item 1
    * Item 2
 
    ### Dynamic Data is supported:
    The current Time is: @DateTime.Now.ToString("HH:mm:ss")

    ```cs
    // this c# is a code block
    for (int i = 0; i < lines.Length; i++)
    {
        line1 = lines[i];
        if (!string.IsNullOrEmpty(line1))
            break;
    }
    ```</markdown>

The Markdown is expanded into HTML to replace the markdown TagHelper content.

You can also easily parse Markdown both in code and inside of Razor Pages:

string html = Markdown.Parse(markdownText);

Inside of Razor code you can do:

<div>@Markdown.ParseHtmlString(Model.ProductInfoMarkdown)</div>

Get it

The packaged component includes the TagHelper and a simple way to parse Markdown in code or inside of a Razor Page.

It's available as a NuGet Package:

PM> Install-Package Westwind.AspNetCore.Markdown

And you can take a look at the source code on Github:

Why do I need a Markdown Control?

Let's take a step back - why would you even need a content control for Markdown Parsing?

Markdown is everywhere these days and I for one have become incredibly dependent on it for a variety of text scenarios. I use it for blogging, for documentation both for code on Git repos and actual extended documentation. I use it for note keeping and collaboration in Gists or Github Repos, as well as a data entry format for many applications that need to display text content a little bit more richly than using plain text. Since I created the Markdown control I've also been using that extensively for quite a bit of my static content and it's made it much easier to manage some of my content this way.

What does it do?

The main reason for this component is the ability to embed Markdown into content with a simple tag that gets parsed into HTML at runtime. This is very useful for content pages that contain a lot of raw static text. It's a lot easier to write Markdown text in content pages than it is to write HTML tag soup consisting of <p>,<ul> and <h3> tags. Markdown is a heck of a lot more comfortable to type and maintain and this works well for common text content. It won't replace HTML for markup for an entire page, but it can be a great help with large content blocks inside of a larger HTML page.

In this post I'll create <markdown> TagHelper that can convert inline Markdown like this:

<h3>Markdown Tag Helper Block</h3><markdown>
    #### This is Markdown text inside of a Markdown block

    * Item 1
    * Item 2
 
    ### Dynamic Data is supported:
    The current Time is: @DateTime.Now.ToString("HH:mm:ss")

    ```cs
    // this c# is a code block
    for (int i = 0; i < lines.Length; i++)
    {
        line1 = lines[i];
        if (!string.IsNullOrEmpty(line1))
            break;
    }
    ```</markdown>

The content of the control is rendered to HTML at runtime which looks like this:

The above renders with the default Bootstrap styling of an ASP.NET Core MVC default Web site, plus highlight.js for the code highlighting.

It's important to understand that rendered Markdown is just HTML; there's nothing in Markdown that handles styling of the content - that's left up to the host site or tool that displays the final HTML output. Any formatting comes from the host application, in this case the stock ASP.NET Core template for sample purposes.

Using this control allows you to easily create content areas inside of HTML documents that are rendered from Markdown. You write Markdown, the control renders HTML at runtime.

As part of this component I'll also provide a simple way to parse Markdown in code and inside of @RazorPages.

Creating a Markdown TagHelper

Before we dive in let's briefly discuss what TagHelpers are for those of you new to ASP.NET Core and then look at what it takes to create one.

What is a TagHelper?

TagHelpers are a new feature for ASP.NET Core MVC, and it's easily one of the nicest improvements for server side HTML generation. TagHelpers are self contained components that are embedded into a @Razor page. TagHelpers look like HTML tags and unlike Razor expressions (@Expression) feel natural inside of standard HTML content in a Razor page.

Many of the existing model binding and HTML helpers in ASP.NET have been replaced by TagHelpers and TagHelper behaviors that allow you to bind directly to HTML controls in a page. For example, here is an input tag bound to a model value:

<input type="email" asp-for="Email" 
       placeholder="Your email address"
       class="form-control"/>

where asp-for extends the input element with an extension attribute to provide the model binding to the value property. This replaces:

@Html.TextBoxFor(model => model.Email, 
                 new { @class = "form-control",
                      placeholder = "your email address", 
                      type = "email" })

Which would you rather use? TagHelpers make it easier to write your HTML markup by sticking to standard HTML syntax, which feels more natural than using Razor expressions.

Make your own TagHelpers

Another important point is that it's very easy to create your own TagHelpers, which is the focus of this post. The interface to create a TagHelper is primarily a single-method interface that takes a context input to get at element, tag and content information, and an output object to which the actual TagHelper output is written. Using this approach feels very natural and makes it easy to create your own TagHelpers with minimal fuss.

A TagHelper encapsulates rendering logic via a very simple ProcessAsync() interface that renders a chunk of HTML content into the page at the location the TagHelper is defined. The ProcessAsync() method takes a TagHelperContext as input to let you get at the element and its attributes, and provides an output object that you write string output to in order to generate your embedded content. As we'll see, it takes very little code to create a very useful TagHelper.

In order to use TagHelpers they have to be registered with MVC, either in the page or more likely in the _ViewImports.cshtml page of the project.

To create a Tag Helper these are the things you typically need to do:

  • Create a new Class and Inherit from TagHelper
  • Create your TagHelper implementation via ProcessAsync() or Process().
  • Register your TagHelper in _ViewImports.cshtml
  • Reference your TagHelper in your pages
  • Rock on!

Creating the MarkdownTagHelper Class

For the <markdown> TagHelper I want to create a content control whose content can be retrieved and parsed as Markdown and then converted into HTML. Optionally you can also use a markdown attribute to bind Markdown for rendering - so if you have Markdown as part of the data in your model, you can bind it to this property/attribute in lieu of static content.

Here's the base code for the MarkdownTagHelper that accomplishes these tasks:

[HtmlTargetElement("markdown")]
public class MarkdownTagHelper : TagHelper
{
    [HtmlAttributeName("normalize-whitespace")]
    public bool NormalizeWhitespace { get; set; } = true;

    [HtmlAttributeName("markdown")]
    public ModelExpression Markdown { get; set; }

    public override async Task ProcessAsync(TagHelperContext context, TagHelperOutput output)
    {
        await base.ProcessAsync(context, output);

        string content = null;
        if (Markdown != null)
            content = Markdown.Model?.ToString();

        if (content == null)            
            content = (await output.GetChildContentAsync()).GetContent();

        if (string.IsNullOrEmpty(content))
            return;

        content = content.Trim('\n', '\r');

        string markdown = NormalizeWhiteSpaceText(content);            

        var parser = MarkdownParserFactory.GetParser();
        var html = parser.Parse(markdown);

        output.TagName = null;  // Remove the <markdown> element
        output.Content.SetHtmlContent(html);
    }

}

Before you can use the TagHelper in a page you'll need to register it with the MVC application by sticking the following into the _ViewImports.cshtml:

@addTagHelper *, Westwind.AspNetCore.Markdown

Now you're ready to use the TagHelper:

<markdown>This is **Markdown Text**. Render me!</markdown>

As you can see, the code to handle the actual processing of the Markdown is very short and easy to understand. It grabs either the content of the <markdown> element or the markdown attribute, and then passes that to the Markdown parser to process. The parser turns the Markdown text into HTML, which is then written out as HTML content using output.Content.SetHtmlContent().

The code uses an abstraction for the Markdown Parser so the parser can be more easily replaced in the future without affecting the TagHelper code. I've gone through a few iterations of Markdown Parsers before landing on MarkDig, and I use this code in many places where I add Markdown parsing. I'll come back to the Markdown Parser in a minute.

Normalizing Markdown Text

One issue with using a TagHelper or Control for Markdown is that Markdown expects no margins in the Markdown text to process.

If you have Markdown like this:

<markdown>
    #### This is Markdown text inside of a Markdown block

    * Item 1
    * Item 2
 
    ### Dynamic Data is supported:
    The current Time is: @DateTime.Now.ToString("HH:mm:ss")

    ```cs
    // this c# is a code block
    for (int i = 0; i < lines.Length; i++)
    {
        line1 = lines[i];
        if (!string.IsNullOrEmpty(line1))
            break;
    }
    ```</markdown>

and leave this Markdown in its raw form with the indent, the Markdown parser would render the entire Markdown text as a code block, because the text is indented by 4 spaces, which constitutes a code block in Markdown. Not what we want here!

This is where the NormalizeWhiteSpace property comes into play. This flag, which is true by default, determines whether leading repeated white space is stripped from the embedded Markdown block.

Here's the code to strip leading white space:

string NormalizeWhiteSpaceText(string text)
{
    if (!NormalizeWhitespace || string.IsNullOrEmpty(text))
        return text;

    var lines = GetLines(text);
    if (lines.Length < 1)
        return text;

    string line1 = null;

    // find first non-empty line
    for (int i = 0; i < lines.Length; i++)
    {
        line1 = lines[i];
        if (!string.IsNullOrEmpty(line1))
            break;
    }

    if (string.IsNullOrEmpty(line1))
        return text;

    string trimLine = line1.TrimStart();
    int whitespaceCount = line1.Length - trimLine.Length;
    if (whitespaceCount == 0)
        return text;

    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < lines.Length; i++)
    {
        if (lines[i].Length > whitespaceCount)
            sb.AppendLine(lines[i].Substring(whitespaceCount));
        else
            sb.AppendLine(lines[i]);
    }

    return sb.ToString();
}

string[] GetLines(string s, int maxLines = 0)
{
    if (s == null)
        return null;

    s = s.Replace("\r\n", "\n");

    if (maxLines < 1)
        return s.Split(new char[] { '\n' });

    return s.Split(new char[] { '\n' }).Take(maxLines).ToArray();
}

This code works by looking at the first non-empty line and checking for leading White space. It captures this white space and then removes that same leading whitespace from all lines of the content. This works as long as the Markdown Block uses consistent white space for all lines (ie. all tabs or all n spaces etc.).
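For illustration, the same algorithm can be sketched in Python; the logic mirrors the C# version above, including the strictly-greater length check that leaves shorter (e.g. empty) lines untouched.

```python
# Measure the indent of the first non-empty line and strip that many leading
# characters from every line that is long enough - a sketch of the C# logic.
def normalize_whitespace(text: str) -> str:
    lines = text.replace("\r\n", "\n").split("\n")
    # find the first non-empty line to measure the common indent
    first = next((line for line in lines if line.strip()), None)
    if first is None:
        return text
    indent = len(first) - len(first.lstrip())
    if indent == 0:
        return text
    return "\n".join(
        line[indent:] if len(line) > indent else line
        for line in lines
    )

print(normalize_whitespace("    #### Heading\n\n    * Item 1"))
# -> "#### Heading", blank line, "* Item 1" - no longer a Markdown code block
```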

If normalize-whitespace="false" in the document, you can still use the TagHelper, but you have to ensure that the text is left-justified in the saved Razor file. This is hard if you're using Visual Studio, as it'll try to reformat the document and re-introduce the whitespace, which is why the default for this attribute is true.

To look at the complete code for this class you can check the code on Github:

Razor Expressions in Markdown

If you look back at the Markdown example above you might have noticed that the embedded Markdown includes a @Razor expression inside of the <markdown> tag.

The following works as you would expect:

<markdown>
The current Time is: **@DateTime.Now.ToString("HH:mm:ss")**</markdown>

Razor processes the expression before it passes the content to the TagHelper, so in this example the date is already expanded when the Markdown parsing is fired.

This is pretty cool - you can essentially use most of Razor's features in place. Just make sure that you generate Markdown compatible text from your Razor expressions and code.


Markdown Parsing with Markdig

The TagHelper above relies on a customized MarkdownParser implementation. As mentioned, this component uses the MarkDig Markdown parser, but I added some abstraction around the Markdown parser since I've switched parsers frequently in the past, before settling pretty solidly on MarkDig.

Parsing Markdown with Markdig is pretty simple, and if you want to be quick about it, you can easily create a function that does the following to parse Markdown using MarkDig:

public static class Markdown
{
    public static string Parse(string markdown) 
    {
        var pipeline = new MarkdownPipelineBuilder()
                             .UseAdvancedExtensions()
                             .Build();
        // fully qualify to avoid clashing with this wrapper class' own name
        return Markdig.Markdown.ToHtml(markdown, pipeline);
    }
}        

MarkDig uses a configuration pipeline of optional features that you can add on top of the base parser. The example above adds a number of common extensions (like GitHub Flavored Markdown, list extensions etc.), but you can also add each of the components individually to customize exactly how you want Markdown to be parsed.

The code above is not super efficient, as the pipeline needs to be recreated for each parse operation. That's part of the reason I built a small abstraction layer around the Markdown parser: the parser can be easily switched without affecting the rest of the application, and the generated pipeline can be cached for better performance.

A MarkdownParserFactory

The first building block is a Markdown parser factory that provides an IMarkdownParser interface, which consists of little more than a Parse() method:

public interface IMarkdownParser
{
    string Parse(string markdown);
}

The factory then produces that interface - at this point with a hardcoded implementation for MarkDig in place. The factory also caches the parser instance so it can be reused without rebuilding the entire parsing pipeline on each parse operation:

/// <summary>
/// Retrieves an instance of a markdown parser
/// </summary>
public static class MarkdownParserFactory
{
    /// <summary>
    /// Use a cached instance of the Markdown Parser to keep alive
    /// </summary>
    static IMarkdownParser CurrentParser;

    /// <summary>
    /// Retrieves a cached instance of the markdown parser
    /// </summary>                
    /// <param name="forceLoad">Forces the parser to be reloaded - otherwise previously loaded instance is used</param>
    /// <param name="usePragmaLines">If true adds pragma line ids into the document that the editor can sync to</param>
    /// <returns>Markdown Parser Interface</returns>
    public static IMarkdownParser GetParser(bool usePragmaLines = false,
                                            bool forceLoad = false)                                                
    {
        if (!forceLoad && CurrentParser != null)
            return CurrentParser;
        CurrentParser = new MarkdownParserMarkdig(usePragmaLines, forceLoad);

        return CurrentParser;
    }
}

Finally there's the actual MarkdownParserMarkdig implementation that's responsible for configuring the parser pipeline and parsing the Markdown to HTML. The class inherits from a MarkdownParserBase class that provides a few optional pre- and post-processing features, such as Font Awesome icon embedding and YAML front matter stripping (which MarkDig handles, but some other parsers don't).

/// <summary>
/// Wrapper around the MarkDig parser that provides a cached
/// instance of the Markdown parser. Hooks up custom processing.
/// </summary>
public class MarkdownParserMarkdig : MarkdownParserBase
{
    public static MarkdownPipeline Pipeline;

    private readonly bool _usePragmaLines;

    public MarkdownParserMarkdig(bool usePragmaLines = false, bool force = false, Action<MarkdownPipelineBuilder> markdigConfiguration = null)
    {
        _usePragmaLines = usePragmaLines;
        if (force || Pipeline == null)
        {                
            var builder = CreatePipelineBuilder(markdigConfiguration);                
            Pipeline = builder.Build();
        }
    }

    /// <summary>
    /// Parses the actual markdown down to html
    /// </summary>
    /// <param name="markdown"></param>
    /// <returns></returns>        
    public override string Parse(string markdown)
    {
        if (string.IsNullOrEmpty(markdown))
            return string.Empty;

        var htmlWriter = new StringWriter();
        var renderer = CreateRenderer(htmlWriter);

        Markdig.Markdown.Convert(markdown, renderer, Pipeline);

        var html = htmlWriter.ToString();
        
        html = ParseFontAwesomeIcons(html);

        //if (!mmApp.Configuration.MarkdownOptions.AllowRenderScriptTags)
        html = ParseScript(html);  
                  
        return html;
    }

    public virtual MarkdownPipelineBuilder CreatePipelineBuilder(Action<MarkdownPipelineBuilder> markdigConfiguration)
    {
        MarkdownPipelineBuilder builder = null;

        // build it explicitly
        if (markdigConfiguration == null)
        {
            builder = new MarkdownPipelineBuilder()                    
                .UseEmphasisExtras(Markdig.Extensions.EmphasisExtras.EmphasisExtraOptions.Default)
                .UsePipeTables()
                .UseGridTables()
                .UseFooters()
                .UseFootnotes()
                .UseCitations();


            builder = builder.UseAutoLinks();        // URLs are parsed into anchors
            builder = builder.UseAutoIdentifiers();  // Headers get id="name" 

            builder = builder.UseAbbreviations();
            builder = builder.UseYamlFrontMatter();
            builder = builder.UseEmojiAndSmiley(true);
            builder = builder.UseMediaLinks();
            builder = builder.UseListExtras();
            builder = builder.UseFigures();
            builder = builder.UseTaskLists();
            //builder = builder.UseSmartyPants();            

            if (_usePragmaLines)
                builder = builder.UsePragmaLines();

            return builder;
        }
        
        // let the passed in action configure the builder
        builder = new MarkdownPipelineBuilder();
        markdigConfiguration.Invoke(builder);

        if (_usePragmaLines)
            builder = builder.UsePragmaLines();

        return builder;
    }

    protected virtual IMarkdownRenderer CreateRenderer(TextWriter writer)
    {
        return new HtmlRenderer(writer);
    }
}

The key bit about this class is that it can be used to configure how the Markdown Parser renders to HTML.

That's a bit of setup, but once it's all done you can now do:

var parser = MarkdownParserFactory.GetParser();
var html = parser.Parse(markdown);

and that's what the Markdown TagHelper uses to get a cached MarkdownParser instance for processing.

Standalone Markdown Processing

In addition to the TagHelper there's also a static Markdown class that lets you easily process Markdown in code or inside of a Razor page:

public static class Markdown
{
    /// <summary>
    /// Renders raw markdown from string to HTML
    /// </summary>
    /// <param name="markdown"></param>
    /// <param name="usePragmaLines"></param>
    /// <param name="forceReload"></param>
    /// <returns></returns>
    public static string Parse(string markdown, bool usePragmaLines = false, bool forceReload = false)
    {
        if (string.IsNullOrEmpty(markdown))
            return "";

        var parser = MarkdownParserFactory.GetParser(usePragmaLines, forceReload);
        return parser.Parse(markdown);
    }

    /// <summary>
    /// Renders raw Markdown from string to HTML.
    /// </summary>
    /// <param name="markdown"></param>
    /// <param name="usePragmaLines"></param>
    /// <param name="forceReload"></param>
    /// <returns></returns>
    public static HtmlString ParseHtmlString(string markdown, bool usePragmaLines = false, bool forceReload = false)
    {
        return new HtmlString(Parse(markdown, usePragmaLines, forceReload));
    }
}

In code you can now do:

string html = Markdown.Parse(markdownText);

Inside of Razor code you can do:

<div>@Markdown.ParseHtmlString(Model.ProductInfoMarkdown)</div>

Summary

As with the WebForms control, none of this is anything very new, but I find this is such a common use case that it's worth having a reusable and easily accessible component for this sort of functionality. With a small NuGet package it's easy to add Markdown support both for content embedding as well as simple parsing.

As Markdown is getting ever more ubiquitous, most applications can benefit from including some Markdown features. For content sites especially, Markdown can be a good fit for creating the actual text content inside of pages, and the <markdown> TagHelper discussed here makes that very easy.

I was recently helping my girlfriend set up a landing page for her Web site and using Markdown I was able to actually set up a few content blocks in the page and let her loose on editing her own content easily. No way that would have worked with raw HTML.

Enjoy...

Resources

this post created and published with Markdown Monster

Persisting Static Objects in Web Connection Applications


Persisting Objects in Time in Web Connection

Web Connection Server applications are in essence FoxPro applications that are loaded once and stay in memory. This means they have state that sticks around for the lifetime of the application. Persistence in time…

Global State: The wwServer object

In Web Connection, the top-level object that always sticks around - in effect the global object - is the wwServer instance. Any property or object attached to this instance by extension also becomes global and is effectively around for the lifetime of the application.

What this means is that you can easily attach properties or resources to your wwServer instance and create cached instances of objects and values that are accessible via the Server private variable anywhere in your Web Connection code.

This is useful for resource hungry components that take a while to spin up, or for cached resources like large look up tables or collections/arrays of values that you repeatedly need but maybe don't want to reload on each hit.

Attaching Application State to wwServer

There are a number of ways to attach custom values to the global wwServer instance:

  • Add a Property to your Server Instance
  • Use Server.oResource.Add(key,value)
  • Use Server.oResource.AddProperty(propname,value)

Adding Properties to wwServer Explicitly

You can explicitly add properties to your wwServer instance. Your custom wwServer instance is in MyAppMain.prg (Replace MyApp with whatever your appname is) and in it is a definition for a server instance:

DEFINE CLASS MyAppServer as wwServer OLEPUBLIC

oCustomProperty = null

PROTECTED FUNCTION OnInit

this.oCustomProperty = CREATEOBJECT("MyCachedObjectClass")
...
ENDFUNC

ENDDEFINE

The oCustomProperty value or object is loaded once on startup and then persists for the duration of the Web Connection server application.

You can then access this property from anywhere in a Process class as:

loCustom = Server.oCustomProperty

And voila you have a new property that exists on the server instance and is always persisted.

COM Interfaces vs new Server Properties

One problem with this approach is that the new property causes a COM interface change to the COM server that gets registered when Web Connection runs as a COM server. Whenever the COM interface signature changes, the COM object needs to be explicitly re-registered or else the server might not instantiate under COM.

So, as a general rule it's not a good idea to frequently add new properties to your server instance.

One way to mitigate this is to create one property that acts as a container for any persisted objects and then use that object to hang off any other objects:

DEFINE CLASS ObjectContainer as Custom
   oCustomObject1 = null
   oCustomObject2 = null
   oCustomObject3 = null
ENDDEFINE

Then define this on your wwServer class:

DEFINE CLASS MyAppServer as wwServer OLEPUBLIC

oObjectContainer = null

PROTECTED FUNCTION OnInit

this.oObjectContainer = CREATEOBJECT("ObjectContainer")
...
ENDFUNC

ENDDEFINE

You can then hang any number of sub properties off this object and still access them with:

loCustom1 = Server.oObjectContainer.oCustomObject1
loCustom1.DoSomething()

The advantage of this approach is that you get to create an explicit object contract by way of a class you implement that clearly describes the structure of the objects you are ‘caching’ in this way.

For COM this introduces a single property that is exposed in the externally registered COM interface - adding additional objects to the container has no impact on the COM interface exposed to Windows, so no COM re-registration is required.

Using oResources

The Web Connection server class includes an oResources object property that provides a generic version of what I described in the previous section. Rather than a custom object you create, a pre-created object exists on the server object and you can hang your persistable objects off that instance.

You can use:

  • AddProperty(propname,value) to create a dynamic runtime property
  • Add(key,value) to use a keyed collection value

.AddProperty(), like the name suggests, dynamically adds a property to the .oResources instance:

PROTECTED FUNCTION OnInit

this.oResources.AddProperty("oCustom1", CREATEOBJECT("CustomClass1"))
this.oResources.AddProperty("oCustom2", CREATEOBJECT("CustomClass2"))
...
ENDFUNC

You can then use these custom properties like this:

loCustom1 = Server.oResources.oCustom1

The behavior is the same as the explicit object described earlier, except that there is no explicit object that describes the custom property interface. Rather the properties are dynamically added at runtime.

Using .Add() works similarly, but doesn't add properties - instead it simply stores collection values.

PROTECTED FUNCTION OnInit

this.oResources.Add("oCustom1", CREATEOBJECT("CustomClass1"))
this.oResources.Add("oCustom2", CREATEOBJECT("CustomClass2"))
...
ENDFUNC

This creates collection entries that you retrieve with:

loCustom1 = Server.oResources.Item("oCustom1")
loCustom2 = Server.oResources.Item("oCustom2")

This latter approach works best with truly dynamic resources that you want to add and remove conditionally. Internally, the wwServer oResources property is a wwNameValueCollection, so you can add, remove, and update resources stored in the collection quite easily.
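As a sketch, this collection style lends itself to lazy-loading resources on first use. Note that the class name is a placeholder, and the assumption that .Item() returns null for a missing key should be verified against your version of wwNameValueCollection:

```foxpro
*** Sketch: lazy-load a cached resource on first access.
*** Assumes .Item() returns null for a missing key - verify this
*** against your wwNameValueCollection implementation.
loLookups = Server.oResources.Item("oLookupTables")
IF ISNULL(loLookups)
   *** hypothetical class that loads expensive lookup data once
   loLookups = CREATEOBJECT("LookupTableCache")
   Server.oResources.Add("oLookupTables", loLookups)
ENDIF
```

The resource is created on the first request that needs it and reused by all subsequent requests for the lifetime of the server.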

Persistence of Time

One of the advantages of Web Connection over typical multi-threaded COM server applications in ASP.NET - where COM servers are reloaded on every hit - is that Web Connection has state: the application stays alive between hits. This state allows the FoxPro instance to cache data internally, so data buffers and memory as well as property state can be cached.

You can also leave cursors open and re-use them in subsequent requests. And as I've shown in this post, you can also maintain object state by caching it on the wwServer instance. This sort of ‘caching’ is simply not possible if you have COM servers getting constantly created and re-created.

All this adds up to a lot of flexibility in how you manage state in Web Connection applications. But you also need to be aware of your memory usage. You don't want to go overboard with cached data - FoxPro itself is very good at maintaining internal data buffers, especially if you give it lots of memory to run in.

Be selective in your ‘caching’ of data and state and resort to caching/persisting read-only or read-rarely data only. No need to put memory strain on the application by saving too much cached data. IOW, be smart in what you cache.

Regardless, between Web Connection's explicit caching and FoxPro's smart buffering and memory usage (as long as you properly constrain it) you have a lot of options on how to optimize your data intensive operations and data access.

Now get to it. Time's a wastin'…

this post created with Markdown Monster

Startup Error Tracing in West Wind Web Connection 6


Web Connection 6.0 and later has made it much easier to create a predictable and repeatable installation with Web Connection. It's now possible to create a new project and use the built-in configuration features to quickly and reliably configure your application on a server with yourApp.exe CONFIG from the command line.

This produces a well-known configuration that creates virtuals and scriptmaps and sets common permissions on folders. The configuration is a PRG file that you can customize, so if you need to configure additional folders, set different permissions, or copy files around as part of configuration - you can do that.

Using the preconfigured configuration should in most cases just make your servers work.

But we live in an imperfect world and things do go bump in the night - and so it can still happen that your Web Connection server won't start up. There are many reasons that this can happen from botched permissions on folders or DCOM to startup errors.

In this post I want to talk about server startup problems - specifically FoxPro code startup errors (rather than system startup errors due to permissions/configuration etc.).

One of the most common problems people run into with Web Connection is application startup errors. You build your application on your development machine, then deploy it on a live server and boom - something goes wrong and the server doesn't start.

Now what?

File Server Startup Errors

Even if you're running in COM mode, if you have startup problems with your COM server it's often a good idea to first switch the COM server into file mode and run it as a file mode application.

If you are running in file mode it's often easier to find startup problems, because you tend to run the application in interactive mode which means you get to see errors pop up in the FoxPro Window.

If you run into issues here, you can also double-check your development code to see if you can duplicate the behavior. Obviously, make sure your local application works - that's the first line of defense: make sure the error is indeed specific to your server's environment. If it's not, by all means debug it locally and not on the server.

Test Locally First

This should be obvious: But always, always run your server locally first, with an environment as close as possible to what you are running on the server. Run in file mode make sure that works. Run in COM Mode make sure that works. Simulate the user environment you will use on the server locally (if possible) and see what happens.

Always make sure the app runs locally first because it's a heck of a lot easier to debug code on the development machine where you can step through code, than on a server where you usually cannot.

COM Server Startup Errors

Startup errors tend to fall into two categories:

  • System Startup Errors
  • FoxPro Server Startup Errors

System errors are permissions, invalid ProgIds, DCOM misconfigurations etc. that outright make your server fail before it ever gets a chance to be instantiated. These are thorny issues, but I'm not going to cover them much here. That'll be a topic for another post.

The other area is server startup errors in FoxPro code. These errors occur once the server has been instantiated and initialized and usually occur during the load phase of the server.

Understanding the Startup Sequence: Separated OnInit and OnLoad Sequence

When Web Connection starts up your application in a non-debug compiled EXE COM server, error handling is not initially available as the server initializes. That's because the server initializes as part of an OnInit() sequence and if that fails, well... your server will never actually become live.

In Web Connection 6+ the startup sequence has been minimized with a shortened OnInit() cycle and a delayed OnLoad() handler call that fires only on the first hit to your server. This reduces the potential failure scenarios that can occur if your server fails before it is fully instantiated. Errors can still occur, but they are now a little bit easier to debug because the server will at least instantiate and report the error. Previously, init errors provided no recourse except a log message in the module log saying the server could not be instantiated.

Startup Failures: Module Logging in wcErrors.txt

If the server fails to initialize at the system level (ie. Init() fails and the server never materializes), any errors are logged by the Web Connection Handler (.NET or ISAPI) in wcErrors.txt in the temp folder for the application. Startup errors logged there will include DCOM permissions errors, invalid Class IDs for your COM server, missing files or runtimes or any failure that causes the server to crash during OnInit().

These system level errors can also be triggered if your server's OnInit() code fails. OnInit() fires as part of the FoxPro server's Init() method, which is the object constructor, and if that fails the server instance is never passed back to the host. There's nothing that can be done to recover from an error like that except log it in `wcErrors.txt` and `wcTraceLog.txt`.

Avoid putting code into OnInit()

To keep startup code to an absolute minimum, avoid writing code in your server's OnInit() method. OnInit() is meant to only set essential server operation settings that are needed for Web Connection servers to start. For everything else that needs to initialize use OnLoad(). In typical scenarios you shouldn't have any code in OnInit() beyond the generated default. This alone should avoid server startup crashes due to FoxPro code errors.
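As a sketch of that guideline, a server class might keep OnInit() down to the generated defaults and move all app-specific setup into OnLoad(). The helper method and library names here are hypothetical placeholders:

```foxpro
DEFINE CLASS MyAppServer as wwServer OLEPUBLIC

PROTECTED FUNCTION OnInit
*** Essential server settings only - keep this to the generated
*** defaults so a FoxPro error here can't kill server creation
DODEFAULT()
ENDFUNC

PROTECTED FUNCTION OnLoad
*** All real initialization goes here - errors raised here are
*** trapped by Web Connection and logged to wcTraceLog.txt
THIS.OpenDataConnections()           && hypothetical helper
SET PROCEDURE TO MyAppLibs ADDITIVE  && placeholder library
ENDFUNC

ENDDEFINE
```

If OpenDataConnections() blows up here, the server still instantiates and you get an error page and a log entry instead of a silent instantiation failure.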

Startup Errors are logged to wcTraceLog.txt

Any code-based errors during startup are logged to the wcTraceLog.txt file, which is hooked into the OnInit() and OnLoad() processing of your server. Both methods are wrapped in exception handlers, and if an error is triggered, wcTraceLog.txt receives the error information. You can also implement OnError() to receive the captured exception and log it or otherwise take action.

Folder Permissions for Logging

Make sure that the folder your application's EXE is running out of has read/write access rights for the IIS server account that is running the FoxPro application, as it needs to be able to create and write the wcTraceLog.txt file.

Any failures in OnInit() cause the server to not start so wcTracelog.txt and wcErrors.txt will be your only error source.

Errors in OnLoad() log to wcTraceLog.txt but also display an error page in the browser with the error information (WC 6.15+). If OnLoad() errors occur, the server will not run any further and only displays the error message - requests are aborted until the problem is fixed.

Capturing Startup Errors

Beyond looking in wcTraceLog.txt you can also override the wwServer::OnError() method, which receives the exception of the failure. In that method you can add custom logging and write out additional environment info into the log file.

You can also use the wwServer::Trace() method to write out information to the wcTraceLog.txt log. For thorny problems this lets you put messages into your code to see how far it gets, and echo state that might help you debug the application. It's also useful in requests, but it's especially valuable for debugging startup errors.

The OnError method only serves as an additional error logging mechanism that allows you to capture the error and possibly take action with custom code.

To implement:

FUNCTION OnError(loException)

*** default logging and some cleanup
DoDefault(loException)

*** Do something with the error

*** Also write out to the local trace text log
THIS.Trace(loException.Message)

ENDFUNC

Add Tracing and Logging Into your Code

Finally, if all of this still hasn't gotten your server to start up, you'll have to do some detective work. Your first line of defense is always to debug locally first in a similar environment: make sure you debug in COM mode locally so you get as close as possible to the live environment.

If you really have to debug the live server you can use the wwServer::Trace() method to quickly write out trace messages to the wcTraceLog.txt file.

PROTECTED FUNCTION OnLoad

THIS.Trace("OnLoad Started")

THIS.InitializeDataBase()
THIS.Trace("DataBase Initialized")

THIS.SetLibraries()
THIS.Trace("Libraries loaded")

...

THIS.Trace("OnLoad Completed")
ENDFUNC

By default the wwServer::Trace() method stores simple string output with a date stamp in wcTraceLog.txt in the application's startup folder.

Using this type of Print-Line style output you can put trace points in key parts of your startup sequence to see whether code is reached and what values are set.

Common Startup Errors

Common startup errors include:

Invalid COM Object Configuration

Make sure your servers are listed properly in web.config (.NET) or wc.ini (ISAPI) and point at the right ProgIds for your COM servers. Also make sure the COM servers are registered.

Folder Locations

Make sure that your application can run out of the deployed folder and has access to all the locations that it needs to read local data from. Make sure that paths are set in the environment, network drives are connected, and so forth. Servers don't run under the interactive account, so don't expect the same permissions and environment as your logged-in account, especially if you depend on mapped drives - you probably have to map drives as part of your startup routine by checking whether a drive is mapped and, if not, mapping it. Use SET PATH TO <path> ADDITIVE or set the system path to include needed folders.
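For example, a startup routine (in OnLoad()) could check for a mapped drive and map it if missing. The drive letter, share name and the net use invocation here are placeholders for your environment:

```foxpro
*** Sketch: ensure a network drive is mapped before using it.
*** M: and \\server\share are placeholders - adapt as needed.
IF !DIRECTORY("M:\")
   *** RUN waits for the command to complete; add error
   *** checking for the case where the mapping still fails
   RUN net use M: \\server\share /persistent:no
ENDIF
SET PATH TO M:\data ADDITIVE
```

Doing this at startup rather than relying on the interactive user's drive mappings avoids one of the most common deployment surprises.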

Folder Permissions

Make sure that any files including data files you access on the local file system have the right permissions so they can be read and written to. Remember the IIS or DCOM permissions determine what account your application is running under.

Summary

Startup debugging of Web Connection is always tricky, but Web Connection 6's new features make the process a lot easier by providing much better server configuration support to get your apps running correctly and - if things don't go well on the first try - more error information so you can debug the failure more easily.

In addition to the better error trapping and error reporting you can also take pro-active steps to capture errors and log them out to the trace log for further investigation. Nobody wants to see their applications fail, especially immediately after installation, but now you should be ready to deal with any issues that might crop up. Now - go write some code!

IIS Server Authentication and Loopback Restrictions


Here's a common problem I hear from users installing Web Connection and trying to test their servers from the same live server machine:

When logged into your Windows server, IIS Windows authentication through a browser does not work for either Windows Auth or Basic Auth using Windows user accounts. Login attempts just fail with a 401 error.

However, accessing the same site externally and logging in works just fine, using Windows log on credentials. It only fails when on the local machine.

Loopback Protection on Windows Server

In the past these issues only affected servers, but today I just noticed that on my local Windows install with Windows 10 1803 I also wasn't able to log in with Windows Authentication locally. As if it isn't hard enough to figure out which user id you need on Windows between live account and local account, I simply was unable to log in with any bloody credentials.

Servers have always had this 'feature' enabled by default to prevent local access attacks on the server (not quite sure what this prevents since you have to log in anyway, but whatever).

Attempting to authenticate on a local Web site with a Windows account's username and password always fails when this policy is enabled. For Web Connection this specifically affects the admin pages that rely on Windows authentication for access.

This problem is caused by a policy called Loopback Protection that is enabled on server OSs by default. Loopback Protection disables authenticating against local Windows accounts through HTTP and a Web browser.

For more info please see this Microsoft KB entry:
https://support.microsoft.com/en-us/kb/896861

Quick Fix: Disable Loopback Check

The workaround is a registry hack that disables this policy explicitly.

In Web Connection 6.21 and later you can run the following from the Console, running as an Administrator:

c:\> console.exe disableloopbackcheck

To reverse the setting:

c:\> console.exe disableloopbackcheck off

To perform this configuration manually find this key in the registry on the server:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa

and edit or add a new key:

DisableLoopbackCheck (DWORD)

then set the value to 1 to disable the loopback check (local authentication works), or to 0 to re-enable it (local authentication is not allowed).
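If you prefer a script over RegEdit, the same change can be made from an elevated Windows command prompt with reg.exe (this relaxes a security mitigation, so apply it deliberately):

```shell
:: Disable the loopback check (allow local Windows authentication)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 1 /f

:: Re-enable the loopback check
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v DisableLoopbackCheck /t REG_DWORD /d 0 /f
```

A reboot is not normally required, but you may need to restart IIS for the change to take effect on an active site.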

Summary

Web Connection 6.21 isn't here yet as of the time of writing of this post, but in the meantime you can just use the registry hack to work around the issue.

Web Connection 6.21 is here


We've released Web Connection 6.21 which is a relatively small update that has a few bug fixes and operational tweaks.

There are also a few new features, one of which is not Web specific but a very useful generic FoxPro enhancement feature.

  • wwDotnetBridge now supports Event Handling for .NET Objects
  • New .NET Runtime Loader for wwDotnetBridge
  • Console command for Disable Loopback Check

As always, registered users of version 6.x can download free registered version updates using the download information that was sent by email. To check out Web Connection you can always pick up the shareware version:

Event Handling for wwDotnetBridge

This is a cool feature that opens up additional features of .NET to FoxPro. You can now use wwDotnetBridge to handle .NET events in an asynchronous manner. Similar to the async method calls that were introduced a few releases back, you can now handle events in .NET and get called back, without having to register the .NET component or implement a COM interface.

This was previously not possible, or at the very least required that you create a COM object and interface that mapped the .NET type and was registered. With this new functionality you can use wwDotnetBridge alone, without any sort of special registration or even having to implement a FoxPro interface. You simply create a proxy object that handles the select events you choose to handle. Other events are simply ignored.

So what can you do with this? Here are a few example ideas:

  • Use SMTPClient natively and get notified on Progress events
  • Use WebClient and get notified of Web Events
  • Use the FileSystemWatcher on a folder and be notified of file updates

Basically most components that use events can now be used with wwDotnetBridge!

This feature was landed in the OSS version of wwDotnetBridge by a contributor, Edward Brey, who did most of the work for the event handling. Thanks Ed!

An Example

The following is an example using the .NET FileSystemWatcher object which allows you to monitor any file changes and updates in a given folder and optionally all of its subfolders.

The following monitors all changes in my c:\temp folder and all its subfolders which includes my actual Windows Temp folder - meaning it's a busy folder, lots of stuff gets written to temp files in Windows, so this generates a lot of traffic.

CLEAR
LOCAL loBridge as wwDotNetBridge
loBridge = GetwwDotnetBridge()

*** Create .NET File Watcher
loFW = loBridge.CreateInstance("System.IO.FileSystemWatcher","C:\temp")
loFw.EnableRaisingEvents = .T.
loFw.IncludeSubDirectories = .T.

*** Create Handler instance that maps events we want to capture
loFwHandler = CREATEOBJECT("FwEventHandler")
loSubscription = loBridge.SubscribeToEvents(loFw, loFwHandler)

DOEVENTS

lcFile = "c:\temp\test.txt"
DELETE FILE ( lcFile )  
STRTOFILE("DDD",lcFile)
STRTOFILE("FFF",lcFile)

* Your app can continue running here
WAIT WINDOW

loSubscription.Unsubscribe()

RETURN


*** Handler object implementation that maps the
*** event signatures for the events we want to handle
DEFINE CLASS FwEventHandler as Custom

FUNCTION OnCreated(sender,ev)
? "FILE CREATED: "
?  ev.FullPath
ENDFUNC

FUNCTION OnChanged(sender,ev)
? "FILE CHANGE: "
?  ev.FullPath
ENDFUNC

FUNCTION OnDeleted(sender, ev)
? "FILE DELETED: "
?  ev.FullPath
ENDFUNC

FUNCTION OnRenamed(sender, ev)
LOCAL lcOldPath, lcPath

? "FILE RENAMED: " 
loBridge = GetwwDotnetBridge()

lcOldPath = loBridge.GetProperty(ev,"OldFullPath")
lcPath = loBridge.GetProperty(ev,"FullPath")
? lcOldPath + " -> " + lcPath

ENDFUNC

ENDDEFINE

How does it work?

The event handling is based on a simple callback mechanism: a FoxPro event handler object is passed into .NET and is called back whenever an event occurs. The behavior is similar to the way BINDEVENT() works in FoxPro, with a slightly more explicit process.

It allows you to capture events on a source object by passing in a callback handler that maps the events of the target object to corresponding methods on the handler.

To handle events:

  • Create an Event Handler Object
    Create a Custom class that implements methods matching the events of the .NET object that fires them, using an On<EventName> naming convention. Each method's parameters should match the parameters of the .NET event delegate. You only need to implement the methods for the events you want to listen to - other events are ignored.

  • Create an Event Subscription
    Call loBridge.SubscribeToEvents() which binds a .NET event source object to a FoxPro event handler.

  • Continue running your Application
    Events are handled asynchronously in .NET and run in the background. Your application continues running and as events fire in .NET, the On<Event> methods are fired on the Event Handler object in FoxPro.

  • Unsubscribe from the Event Subscription
    When you no longer want to listen to events, call loSubscription.Unsubscribe(). Make sure you do this before you exit FoxPro or you may crash VFP on shutdown.

The key here is that you have to make sure that both the .NET object you want to handle events on and the event handler stay alive, because they essentially run in the background waiting for events to fire. This means storing these references on permanent objects like your main application's form, the FoxPro _screen, or global variables.
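In the sample above the references are local variables that go out of scope when the program ends. A minimal sketch of parking them on the _screen object so they survive - the property names here are made up for illustration:

```foxpro
*** Sketch: attach the watcher, handler and subscription to _screen
*** so they stay alive after the launching code goes out of scope.
*** The property names are hypothetical.
ADDPROPERTY(_screen, "oFileWatcher", loFw)
ADDPROPERTY(_screen, "oFwHandler", loFwHandler)
ADDPROPERTY(_screen, "oFwSubscription", loSubscription)

*** Later, on application shutdown:
* _screen.oFwSubscription.Unsubscribe()
```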

Events are not as prominent in .NET as they used to be back in the high-flying days of UI frameworks. Few operational components fire events, but many of the core system IO services have events you can handle. Progress and completion events are common.

Now we have the tools to use these events in the same easy fashion as all other .NET access with wwDotnetBridge.
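For example, a completion event like SmtpClient's SendCompleted can be hooked up the same way as the FileSystemWatcher above. A sketch - the mail host and the SmtpEventHandler class are hypothetical:

```foxpro
*** Sketch: subscribe to SmtpClient's SendCompleted event
*** "mail.example.com" and the SmtpEventHandler class are made up
loBridge = GetwwDotnetBridge()
loSmtp = loBridge.CreateInstance("System.Net.Mail.SmtpClient","mail.example.com")

*** Handler class would implement OnSendCompleted(sender, ev)
loSmtpHandler = CREATEOBJECT("SmtpEventHandler")
loSubscription = loBridge.SubscribeToEvents(loSmtp, loSmtpHandler)
```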

New wwDotnetBridge .NET Runtime Loader

In this release the .NET runtime loader used for wwDotnetBridge has been updated to use the latest loader specific for .NET 4.0 and later. In past years we weren't able to use the new loader because the older versions still loaded .NET 2.0, but with the switch to 4.5 recently we can now take advantage of the new loader.

There are a couple of advantages here. The new loader is the officially recommended approach and provides a cleaner path to runtime loading, and more importantly it provides more error information. Previously the error information available from CLR loading was very cryptic, as the runtime did not report the underlying error, only a generic load failure. The new version reports the underlying error information, which is now passed on to wwDotnetBridge.

This feature was also landed by Edward Brey in the OSS version of wwDotnetBridge.

Console Command for disabling the Loopback Check Policy for Authentication on Servers

On servers, and now also on newer versions of Windows 10 (?), IIS enforces a local loopback check policy that doesn't allow local Windows authentication to work. If the policy is applied, trying to access the Admin pages with authentication fails. This can be a real pain when accessing the Web Connection Admin pages, which by default rely on Windows Authentication to allow access to the Admin functionality.

The problem manifests when you try to log in - valid login credentials will not actually authenticate. Instead you get 401 errors, which are authentication errors.

Windows Server has a policy that explicitly enables this Loopback Checking behavior, which effectively disables Admin access. Recently I've also noticed that on Windows 10 1803 I couldn't access local addresses when using custom mapped local domains (i.e. test.west-wind.com mapped to my localhost address).

There is a workaround for this issue by using a registry hack. This release now has a Console function that lets you set this registry setting without having to hack the registry manually:

console.exe DisableLoopbackChecking
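Under the hood the workaround is the well-known DisableLoopbackCheck registry value. If you'd rather apply it manually, or verify what was set, here's a sketch using Windows Script Host from FoxPro:

```foxpro
*** Sketch: set the documented loopback check override directly.
*** Requires running as Administrator; an IISRESET or reboot may be needed.
loShell = CREATEOBJECT("WScript.Shell")
loShell.RegWrite("HKLM\SYSTEM\CurrentControlSet\Control\Lsa\DisableLoopbackCheck", ;
                 1, "REG_DWORD")
```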

I also wrote up a blog post with more information today:

Release Summary

Besides the marquee features, there are just a few small tweaks and bug fixes to the core libraries.

To see all that's changed in recent versions:

As always, let us know if you have questions or run into issues with new features or old on the message board:

Enjoy...

this post created and published with Markdown Monster

Testing a Web Connection COM Server with FoxPro or PowerShell


This is a quick tip to a question that comes up frequently when testing runtime installations:

How can I quickly test whether my COM server is properly installed and working on a live server where I don't have the full Visual FoxPro IDE installed?

If you recall, when you install Web Connection on a live server the preferred mode of operation is COM Mode, where Web Connection servers run as COM objects. If you ever run into a problem with a COM server not loading, the first thing you want to do is check whether the COM server can be loaded outside of Web Connection - either using a dedicated FoxPro IDE installation or, if you only have the FoxPro runtimes available, using PowerShell.

Registering your COM Server

The first step for COM servers on a new machine is that they have to be registered in the Windows Registry. When you build during development, Visual FoxPro automatically registers the COM server as part of the build, but on a live server install you have to register the server manually.

Assuming you have an EXE server called MyApp, you can register your server using the following from a Command or PowerShell prompt running as an Administrator:

MyApp.exe /regserver

COM registration requires Admin access because the registration data is written into the HKEY_LOCAL_MACHINE key of the registry, which is writable only as an Admin user. On a server this usually isn't an issue as you typically are logged on as an Admin user, but on a local dev machine you typically need to start Command or PowerShell with Run As Administrator.

The /regserver Switch produces no Output

One problem with the /regserver switch is that it gives absolutely no feedback. You run it on your EXE and it looks like nothing happened regardless of whether it succeeded or failed. No output, no dialog - nothing.
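Since /regserver is silent, one way to confirm that the registration actually took is to look for the ProgId in the registry. A sketch using Windows Script Host from FoxPro - the MyApp.MyAppServer ProgId is the assumed server name from above:

```foxpro
*** Sketch: read the CLSID registered under the ProgId.
*** RegRead() throws an error if the key doesn't exist.
loShell = CREATEOBJECT("WScript.Shell")
TRY
   lcClsId = loShell.RegRead("HKCR\MyApp.MyAppServer\CLSID\")
   ? "Registered with CLSID: " + lcClsId
CATCH
   ? "MyApp.MyAppServer is not registered"
ENDTRY
```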

COM Registration is Automatic with Web Connection Configuration Tooling

Note that if you're using the new Web Connection self-configuration tooling for applications, using YourServer_Config.prg or YourServer.exe CONFIG, the COM registration is automatically run for you, so you don't have to manually register the server.

The naming of the server by default will be MyApp.MyAppServer - the naming is based on the project name plus the OLEPUBLIC server class name, which is auto-generated when the project is created. Keep in mind that if you change the name of the project or the class, the COM server name will also change, which can break existing installations.

When it's all said and done you should have a COM server registered as MyApp.MyAppServer.

Re-Register COM Servers when the Server Interface Changes

Note that COM server registration is always required on first installation, but also when you make changes to the public COM interface of the server. COM registration writes ClassIds, ProgIds and Type library information into the registry and if the COM interface changes these ids often change along with the interface signatures. So remember to re-register your servers whenever properties or methods on the Server class are added or changed.

Testing the Server

So, to test the server and see if it's actually working, you can do the following using FoxPro code:

loServer = CREATEOBJECT("MyApp.MyAppServer")
? loServer.ProcessHit("")   && produces an error HTML page if it works

This produces an error page with a 404 Not Found header because no path was passed. This is usually all you need to check whether the server can load and run. It's easy to run and remember.
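If the server can't load, CREATEOBJECT() fails with a COM error. A small sketch that wraps the test so you see the actual error message instead of an unhandled failure:

```foxpro
*** Sketch: surface the COM error if the server can't be instantiated
TRY
   loServer = CREATEOBJECT("MyApp.MyAppServer")
   ? loServer.ProcessHit("")   && error HTML page means the server runs
CATCH TO loEx
   ? "Server failed to load: " + loEx.Message
ENDTRY
```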

If you want to see a real response from the server you can instead specify a physical path to the request. For example, to test the Web Connection sample server I can do:

loServer = CREATEOBJECT("wcDemo.wcDemoServer")
? loServer.ProcessHit("&PHYSICAL_PATH=c:\wconnect\web\wconnect\testpage.wwd")
loServer = null

which should produce the output of the testpage.

Note it'll depend on the URL you hit whether additional parameters like query strings, form variables or other URL parts are required, but if you fire a simple GET request it should typically work.

No FoxPro Installation? Use PowerShell

On a live server, however, you often don't have the FoxPro IDE installed, so if you want to test a COM server you can't use FoxPro code. However, Windows PowerShell can instantiate COM objects (and also .NET objects), so we can use a PowerShell script to test the server.

$server =  new-object -comObject 'yourProject.yourProjectServer'
$server.ProcessHit("")

This should produce an HTML error page with an HTTP 404 response header that says page not found.

If you want to test a 'real' request, you can provide a physical path - here again using the Web Connection sample server as an example:

$server =  new-object -comObject 'wcDemo.wcDemoServer'
$server.ProcessHit("&PHYSICAL_PATH=c:\wconnect\web\wconnect\testpage.wwd")

# release the server (optional)
[System.Runtime.Interopservices.Marshal]::ReleaseComObject($server) | Out-Null

Note the rather nasty syntax required to release a COM server from memory. Alternatively, you can shut down the PowerShell session to release the object.

Summary

Testing COM objects on an installed server is something that is often needed if you are troubleshooting an installation. A FoxPro installation is easiest, but if you only have a runtime install the PowerShell option is a good and built-in alternative.
