Channel: Rick Strahl's FoxPro and Web Connection Weblog

How to work around Visual FoxPro's 16 Megabyte String Limit


If you take a look at the System Limits in the Visual FoxPro documentation, you'll find that FoxPro's string length limit is somewhere around 16 megs. The following is from the FoxPro documentation:

Maximum # of characters per character string or memory variable: 16,777,184

Now 16mb seems like a lot of data, but in certain environments like Web applications it's not uncommon to send or receive data larger than 16 megs. In fact, last week I got a message from a user of our Client Tools lamenting the fact that the HTTP Upload functionality does not allow for uploads larger than 16 megs. One of his applications is trying to occasionally upload rather huge files to a server using our wwHttp class. At the time I did not have a good solution for him due to the 16meg limit.

What does the 16meg Limit really mean?

The FoxPro documentation actually is not quite accurate! You can actually get strings much larger than 16megs into FoxPro. For example you can load up a huge file like the Office 2010 download from MSDN like this:

lcFile = FILETOSTR("e:\downloads\en_office_professional_plus_2010_x86_515486.exe")
? LEN(lcFile)

The size of this string: 681,876,016 bytes or 681megs! Ok that's a little extreme :-) but to my surprise that worked just fine; you can load up a really huge string in VFP if you need to. But when you get over 16megs the behavior of strings changes and you can't do all the things you normally do with strings.

Other operations do not work, however. For example, the following, which creates an 18 meg string, fails:

lcString = REPLICATE("1234567890",1800000)

with the error "String too long to fit".

However, the following, which creates a 25 meg string, does work:

lcString = REPLICATE("1234567890",1500000)
lcString = lcString + REPLICATE("1234567890",1000000)
? LEN(lcString)  && 25,000,000

The following is almost identical, except that it assigns the combined string to a new variable, and it does not work:

lcString = REPLICATE("1234567890",1500000)
lcNewString = lcString + REPLICATE("1234567890",1000000) 

And that, my friends, is the real sticking point with large strings. You can create them, but once they get bigger than 16 megs you can no longer assign them to a new variable. That might sound easy to avoid, but it's actually tough to do. If you pass strings to methods it's very likely that they are copied into temporary variables or added to another variable in a simulated buffer, and that is typically where large strings fail.

So what can we learn from this:

Doesn't Work:

  • Assigning a massive FoxPro string to another string fails
  • Some FoxPro commands like REPLICATE() can't create output larger than 16 megs

Works:

  • Assigning a massive string from a file using FILETOSTR() works
  • Adding to the same large string (lcOutput = lcOutput + " more text") works and the string can grow
  • Calling methods that manipulate a string works as long as the result is assigned back to the same string

There are some limitations, but knowing that you can work with a single string instance that grows beyond the limit is actually good news. What this means is that if you're careful with how you use strings in FoxPro, you can fairly easily get around the 16 meg string limit.
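To make the rule concrete, here's a minimal sketch of the safe pattern - growing a single buffer variable in place - next to the assignment that fails (variable names are illustrative):

```foxpro
*** Grow ONE buffer in place - this works past 16 megs
lcBuffer = ""
FOR lnX = 1 TO 20
   lcBuffer = lcBuffer + REPLICATE("1234567890",100000)  && +1 meg per pass
ENDFOR
? LEN(lcBuffer)  && 20,000,000

*** But copying the oversized result to a NEW variable
*** fails with "String too long to fit":
* lcCopy = lcBuffer
```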

This actually worked well for me with the wwHttp class and the POST issue: larger-than-16-meg files now work, though string parameters still don't. Internally wwHttp uses a cPostBuffer property to hold the POST data. The failure was occurring in the send code, which would copy the string to a temporary string to get the size, then pass that to the WinInet APIs. The fix for this was fairly easy: rather than creating the temporary variables (which were redundant anyway) I simply used the class property directly throughout the code without any hand-off, and voila, now wwHttp supports POSTs of greater than 16 megs.

The code I use is kinda ugly because it's doing lots of string concatenation to build up the POST buffer - something along the lines of this excerpt from wwHttp::AddPostKey:

************************************************************************
* wwHTTP :: AddPostKey
*********************************
***  Function: Adds POST variables to the HTTP request
***    Assume: depends on nHTTPPostMode setting
***      Pass: 
***    Return:
************************************************************************
FUNCTION AddPostKey(tcKey, tcValue, llFileName)
LOCAL lcOldAlias
tcKey=IIF(VARTYPE(tcKey)="C",tcKey,"")
tcValue=IIF(VARTYPE(tcValue)="C",tcValue,"")


IF tcKey="RESET" OR PCOUNT() = 0
   THIS.cPostBuffer = ""
   RETURN
ENDIF

*** If we post a raw buffer swap parms
IF PCOUNT() < 2
   tcValue = tcKey
   tcKey = ""
ENDIF

IF !EMPTY(tcKey)
   DO CASE
    *** Url Encoded
    CASE THIS.nhttppostmode = 1         
         THIS.cPostBuffer = this.cPostBuffer + IIF(!EMPTY(this.cPostBuffer),"&","") + ;
                            tcKey +"="+ URLEncode(tcValue) 
      *** Multi-part formvars and file
    CASE this.nHttpPostMode = 2
      *** Check for File Flag -  HTTP File Upload - Second parm is filename
      IF llFileName
           THIS.cPostBuffer = THIS.cPostBuffer + "--" + MULTIPART_BOUNDARY + CRLF + ;
            [Content-Disposition: form-data; name="]+tcKey+["; filename="] + JUSTFNAME(tcValue) + ["]+CRLF+CRLF
         this.cPostBuffer = this.cPostBuffer + FILETOSTR(FULLPATH(tcValue))
         this.cPostBuffer = this.cPostBuffer + CRLF
      ELSE
           this.cPostBuffer = this.cPostBuffer +"--" + MULTIPART_BOUNDARY + CRLF + ;
            [Content-Disposition: form-data; name="]+tcKey+["]+CRLF+CRLF
         this.cPostBuffer = this.cPostBuffer + tcValue
      ENDIF
   ENDCASE
ELSE
   *** If there's no Key post the raw buffer
   this.cPostBuffer = this.cPostBuffer +tcValue
ENDIF

ENDFUNC

AddPostKey can accept either a string value or a filename to load from. The file loading works by accepting the filename and then directly loading the file from within the function:

this.cPostBuffer = this.cPostBuffer + FILETOSTR(FULLPATH(tcValue))

This works fine because the file is directly loaded up into the buffer with no intermediate string variable.

You cannot however pass a string that is greater than 16 megs into this function because the code that adds the key basically does this with the tcValue parameter:

this.cPostBuffer = this.cPostBuffer + tcValue

which assigns the larger-than-16-meg string (tcValue in this case) to another variable, and as discussed earlier that fails with "String too long to fit". When you use a string variable to buffer your output, there's no workaround for adding a larger-than-16-meg string to another variable. So my code now works with files loaded from disk, but not with string parameters.

Good but not good enough!

Files for Large Buffers

Based on the earlier examples we know that we can easily load up massive content from a file, so FILETOSTR() offers an easy way to retrieve large strings. Knowing that, it's possible to build a stream-like class that lets you accumulate string content in a file and then retrieve it later. To do this I created a wwFileStream class. Using the class looks like this:

*** Load library
DO wwapi

*** Create 20 meg string
lcString = REPLICATE("1234567890",1500000)
lcString = lcString + REPLICATE("1234567890",500000)

*** Create a stream
loStream = CREATEOBJECT("wwFileStream")

*** Write the 20meg  string
loStream.Write(lcString)

*** Add some more string data
loStream.WriteLine("...added content")

*** Now write a 16meg+ file to the buffer as well
loStream.WriteFile("e:\downloads\ActiveReports3_5100158.zip")

*** Works
lcLongString = loStream.ToString()

*** 55+ megs
? loStream.nLength
? LEN(lcLongString)

*** Clear the file (auto when released)
loStream.Dispose()

Using this mechanism you can build up very large strings from files or other strings, regardless of the size of the string.

How wwFileStream works

Internally wwFileStream opens a low level file and tracks the handle. Each Write() operation does an FWRITE() to disk and the handle is released when the class goes out of scope.

The class implementation is pretty straightforward:

*************************************************************
DEFINE CLASS wwFileStream AS Custom
*************************************************************
*: Author: Rick Strahl
*:         (c) West Wind Technologies, 2012
*:Contact: http://www.west-wind.com
*:Created: 01/04/2012
*************************************************************

nHandle = 0
cFileName = "" 
nLength = 0


************************************************************************
*  Init
****************************************
FUNCTION Init()

this.cFileName = SYS(2023)  + "\" +  SYS(2015) + ".txt"
this.nHandle = FCREATE(this.cFileName)
this.nLength = 0

ENDFUNC
*   Init

************************************************************************
*  Destroy
****************************************
FUNCTION Destroy()
this.Dispose()
ENDFUNC
*   Destroy

************************************************************************
*  Dispose
****************************************
FUNCTION Dispose()

IF THIS.nHandle > 0
   TRY
      FCLOSE(this.nHandle)
      DELETE FILE (this.cFileName)
   CATCH
   ENDTRY
   *** Reset the handle so a second Dispose() is a no-op
   this.nHandle = 0
ENDIF
this.nLength = 0
ENDFUNC
*   Dispose

************************************************************************
*  Write
****************************************
FUNCTION Write(lcContent)
THIS.nLength = THIS.nLength + LEN(lcContent)
FWRITE(this.nHandle,lcContent)
ENDFUNC
*   Write

************************************************************************
*  WriteLine
****************************************
FUNCTION WriteLine(lcContent)
this.Write(lcContent)
this.Write(CHR(13) + CHR(10))
ENDFUNC
*   WriteLine

************************************************************************
*  WriteFile
****************************************
FUNCTION WriteFile(lcFileName)
lcFileName = FULLPATH(lcFileName)
this.Write(FILETOSTR( lcFileName ))
ENDFUNC
*   WriteFile

************************************************************************
*  ToString()
****************************************
FUNCTION ToString()
LOCAL lcOutput

FCLOSE(this.nHandle)
lcOutput = FILETOSTR(this.cFileName)

*** Reopen the file
this.nHandle = FOPEN(this.cFileName,1)
FSEEK(this.nHandle,0,2)

RETURN lcOutput
ENDFUNC
*   ToString()


************************************************************************
*  Clear
****************************************
FUNCTION Clear()

THIS.Dispose()
THIS.Init()

ENDFUNC
*   Clear

ENDDEFINE
*EOC wwFileStream 

The code is fairly self-explanatory. The class creates a file in the temp folder and saves the handle. Any write operation then uses the file handle to FWRITE() either a string or the output from FILETOSTR(). ToString() can be called to retrieve the content: it closes the file, reads it back, then reopens it and seeks to the end. When the class is released the file handle is closed and the temporary file is deleted.

Using this class makes it easy to create large strings and hold onto them. The additional advantage is that memory usage stays low: strings are loaded up only briefly, immediately written to file, and can then be released. So if you're dealing with very large strings, a class like this is actually highly recommended. In fact, Web Connection uses this same approach for file-based application output.

A matching MemoryStream Class

While the FileStream class works, it does have some overhead compared to memory-based operation, especially when you're dealing with small amounts of data. In the wwHttp class, for example, I would not want to create a new wwFileStream for each POST operation. 99% of POST ops are going to be lightweight, so it makes sense to only use the wwFileStream class selectively.

In order to do this I also created a wwMemoryStream class, which has the same interface as wwFileStream and which uses a simple string property on the class to hold data. Since the classes have the same interface they are interchangeable.
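Because the interfaces match, the consuming code is identical regardless of which class is instantiated. A quick sketch of the idea (the llExpectLargePost flag is made up for illustration):

```foxpro
*** Pick the implementation once...
loStream = CREATEOBJECT(IIF(llExpectLargePost,"wwFileStream","wwMemoryStream"))

*** ...then the rest of the code doesn't care which one it got
loStream.Write("some content")
loStream.WriteLine("a line of content")
lcResult = loStream.ToString()
loStream.Dispose()
```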

The code for wwMemoryStream looks like this:

*************************************************************
DEFINE CLASS wwMemoryStream AS Custom
*************************************************************
*: Author: Rick Strahl
*:         (c) West Wind Technologies, 2012
*:Contact: http://www.west-wind.com
*:Created: 01/05/2012
*************************************************************

cOutput = ""
nLength = 0

************************************************************************
*  Destroy
****************************************
FUNCTION Destroy()
THIS.Dispose()
ENDFUNC
*   Destroy

************************************************************************
*  Dispose
****************************************
FUNCTION Dispose()
this.cOutput = ""
this.nLength = 0
ENDFUNC
*   Dispose

************************************************************************
*  Clear
****************************************
FUNCTION Clear()
this.cOutput = ""
this.nLength = 0
ENDFUNC
*   Clear

************************************************************************
*  Write
****************************************
FUNCTION Write(lcContent)
this.nLength = this.nLength + LEN(lcContent)
this.cOutput = this.cOutput + lcContent
ENDFUNC
*   Write

************************************************************************
*  WriteLine
****************************************
FUNCTION WriteLine(lcContent)
this.Write(lcContent)
this.Write(CRLF)
ENDFUNC
*   WriteLine

************************************************************************
*  WriteFile
****************************************
FUNCTION WriteFile(lcFileName)
this.Write(FILETOSTR( FULLPATH(lcFileName) ))
ENDFUNC
*   WriteFile

************************************************************************
*  ToString()
****************************************
FUNCTION ToString()
RETURN this.cOutput
ENDFUNC
*   ToString()

ENDDEFINE
*EOC wwMemoryStream

It's now a cinch to create either class depending on the need.

In wwHttp I use wwMemoryStream by default since it addresses the 99% scenario. I added two properties to wwHttp: oPostStream and cPostStreamClass. The class is set to wwMemoryStream which is the default and can be overridden. Then internally when the time comes to create an instance of the stream class I use:

IF VARTYPE(this.oPostStream) != "O"
   this.oPostStream = CREATEOBJECT(this.cPostStreamClass)
ENDIF

This way the user can easily choose which of the streams to use simply by specifying:

loHttp.cPostStreamClass = "wwFileStream"

What's also nice about this approach is that the mechanism becomes extensible. If you want to store POST vars in another storage format, you can simply create another subclass that implements the same methods and store your POST variables in an INI file, in structured storage, etc. An unlikely scenario for POST data, but very useful for other potential data storage scenarios.
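A subclass only needs to implement the same handful of methods to plug in. Here's a hypothetical skeleton (the class name and storage details are made up - fill in the Write/ToString bodies with whatever storage you choose):

```foxpro
*************************************************************
DEFINE CLASS myCustomStream AS Custom
*************************************************************
nLength = 0

FUNCTION Write(lcContent)
*** Persist lcContent to your storage of choice here
this.nLength = this.nLength + LEN(lcContent)
ENDFUNC

FUNCTION WriteLine(lcContent)
this.Write(lcContent + CHR(13) + CHR(10))
ENDFUNC

FUNCTION WriteFile(lcFileName)
this.Write(FILETOSTR(FULLPATH(lcFileName)))
ENDFUNC

FUNCTION ToString()
*** Read the accumulated content back from storage here
RETURN ""
ENDFUNC

FUNCTION Clear()
this.nLength = 0
ENDFUNC

FUNCTION Dispose()
this.Clear()
ENDFUNC

ENDDEFINE
```

Assigning loHttp.cPostStreamClass = "myCustomStream" would then wire it into wwHttp without any other changes.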

BTW, the wwFileStream class is also a fairly useful generic file output tool. If you ever need to write output to files it provides a really easy OO way to do so, cleaning up after itself when you close it. I've used classes like this (wwResponseFile) for years in various applications that need to create file output. It's very useful in many situations.
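As a quick illustration of that generic use, writing a small text report might look something like this (the file name and content are made up; note that the stream's backing file is a temp file that's deleted on Dispose, so the output is copied out first):

```foxpro
loFile = CREATEOBJECT("wwFileStream")
loFile.WriteLine("Status Report - " + TRANSFORM(DATETIME()))
loFile.WriteLine(REPLICATE("-",40))
loFile.WriteLine("All systems nominal")

*** Copy the accumulated output to a permanent file
STRTOFILE(loFile.ToString(),"c:\temp\report.txt")
loFile.Dispose()
```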

Summary

Even though Visual FoxPro has a 16 meg string limit, you now have some tools in your arsenal to work around this limit and work with larger strings. While you can work with larger strings, keep in mind that once you go past 16 megs you can't assign that string to anything else. It also gets much harder (and slower) to do string manipulation on that string once you're beyond VFP's legal limit.

Still it's nice to know that the limit is not a final one and there are ways to work around it.


Transparent Bitmaps on Buttons and other Controls


Overlaid bitmaps in FoxPro have always been a royal pain, but in order to make a UI that looks clean, getting some sort of transparency to work with FoxPro controls is pretty important. Although FoxPro supports most common image formats, including GIF and PNG which support transparency, that transparency isn't supported everywhere within the product.

Specifically the FoxPro Image control supports transparency of the various transparent image formats. If you load up an image control with a GIF or PNG file, the transparency is preserved and the image displays correctly.

However, displaying transparent images on other controls that have a Picture property, unfortunately doesn't work as smoothly. Specifically, here's an example of what I've been struggling with, which is buttons with associated icons:

[Screenshot: two buttons with icons - one without image transparency, one with]

The first button doesn't show transparency and looks terrible, while the second button properly shows transparency and appears like it should, using one of the approaches mentioned below. A similar situation arises with other controls that have Picture properties, like OptionButtons, CheckBoxes, ComboBoxes and ListBoxes as well, although it's less of a problem there because the backgrounds of lists tend to be white. As a matter of fact all container controls have a Picture property and the same rules apply.

By default only the Image control supports transparency properly for all GDI+ image formats.

The good news is that with a bit of trickery you can get FoxPro to render transparent images. The bad news is that there are tradeoffs and extra work required to make it work properly.

Old School Transparency Support: BMP Image

GDI+ image support for non-BMP images is actually a relatively new feature in FoxPro. GDI+ support for graphics was introduced with Visual FoxPro 8. Prior to version 8 only BMP images were directly supported in the product and there are a couple of mechanisms used for handling transparency with BMPs.

The most reliable way to get transparency in FoxPro involves using BMP files and a matching BMP Mask file. The mask file basically holds black dots for each of the pixels that should display and white content for anything that should be transparent. As you might imagine creating MASK files is a pain and adds clutter to your image management - anytime the image changes the mask has to be changed too and keeping things in sync is terrible. To me this has always been a non-starter.

You can also get transparency support with plain BMP images alone. By default BMP images display white pixels as transparent, so as long as you can easily represent your transparent content as white it's easy to do. This is not always so cut and dried, however, because the image 'content' may also contain white pixels. In this case you can fill the content's white pixels with a slightly off-white color like RGB(254,254,254).

Both of these approaches are painful, but they are very reliable. Once you've got your image set up it always works without fail. But both approaches require that you at the very least convert your images to BMP format and potentially tweak them for transparency or create a mask.

Resource Loading in FoxPro

There's another approach that works in most situations because of a quirk in FoxPro that I just found out about last week. FoxPro caches images as resources, and you can trick it into loading images with transparency by first loading them into an Image control, which as mentioned earlier does support transparency. The trick is that you have to load the image into the Image control BEFORE it gets loaded into the Picture property of another control. Because FoxPro caches image data once it's loaded from a path, loading the image into an Image control first caches the transparent version, which is then used by subsequent loads that reference the same disk image (or compiled-in image resource).

The Image control only has to be loaded up and can then be released. The key is that this 'pre-load' has to happen BEFORE you load the same image into other controls. Since the Image control doesn't need to stick around, I use a small function that takes a file path, creates the Image control and sets its Picture property:

************************************************************************
*  LoadImageResource
****************************************
***  Function: Pre-load an image file so transparency is respected
***    Assume:
***      Pass:
***    Return:
************************************************************************
FUNCTION LoadImageResource(lcFile)
LOCAL loImg as Image
loImg = CREATEOBJECT("Image")
loImg.Picture = FULLPATH(lcFile)
ENDFUNC
*   LoadImageResource

When the function exits the Image control is released, but since the image was effectively loaded into the control it is now cached inside of VFP's image cache. When you subsequently load an image that has been pre-loaded with this function into the Picture property of another control, like a button, GIF and PNG transparency displays properly.

In an application I tend to have quite a few image resources to load up, so I centralize all the image pre-loading in one place during startup. I create a function that I call from the application's startup code, typically right after I display a startup splash screen.

The method is simply a conglomeration of LoadImageResource() calls:

************************************************************************
*  LoadImageResources
****************************************
***  Function: Method is used to load up transparent images into
***            UI. This method should be called on startup
***    Assume:
***      Pass:
***    Return:
************************************************************************
FUNCTION LoadImageResources()

LoadImageResource("bmps\search_small.png")
LoadImageResource("bmps\zipfile.gif")
…
ENDFUNC
*   LoadImageResources

This is a simple approach that works fine and allows using PNG and GIF images with transparency on buttons and other controls.

Caveats with Preloading

In some instances, however, you can still end up with non-transparent resources. FoxPro caches resources internally, but there are several ways that resources can get un-cached. Explicitly: if your code calls the CLEAR RESOURCES command anywhere, resources get released and you lose any cached images at that point. If you then go back to a form with a transparent image on a control, the image will display with an opaque background again. Note that the resource cache is not affected by commands like CLEAR ALL or RELEASE ALL.

Implicitly, FoxPro can on certain occasions also unload its image resources internally when memory is really low. This should be very rare, but it's possible and I have seen occasions where it occurs. It's rare enough that it's probably OK to ignore the possibility. If you're concerned about this scenario you can either have code that calls LoadImageResources() (or something like it) at certain key points in the application to explicitly force reloading of the images, or you can load resources as needed every time a given form is fired up. The latter ensures that resources are always fresh, but keep in mind that it causes some extra overhead.
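If you go the per-form route, a minimal sketch is to re-prime the cache from the form's Load event, before any controls bind their Picture properties (the image paths here are illustrative):

```foxpro
*** In a form or form class
PROCEDURE Load
*** Re-prime VFP's image cache so Picture assignments
*** on this form pick up transparency
LoadImageResource("bmps\search_small.png")
LoadImageResource("bmps\zipfile.gif")
ENDPROC
```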

Summary

I sure wish this was easier to accomplish or more obvious. For the longest time I didn't even know about this image pre-loading hack. The safest solution is to use BMP files, but it's definitely way more convenient to use Web-ready files like PNGs or GIFs that support transparency, both for designing the icons and for re-using them across applications and platforms (Web/desktop, as I do).

Having to pre-load resources for transparent images in this fashion is a big hassle as opposed to just loading image resources directly. Worse, it's an ugly hack that's impossible to discover on your own. I found a vague reference to this on a message board somewhere, so it's even hard to search for. Hopefully this blog entry will help clarify the subject a little better, even if it's just for me to remember in the future.

What really sucks about this problem is the fact that it DOES work with preloaded image resources. This suggests that transparency actually works on controls, but there's a small implementation detail in Visual FoxPro that prevents it from working all the time. It seems this would have been a minor thing for the Fox team to fix and get working out of the box if the resources to do so had been there. But alas, we're stuck with a few minor and undiscoverable hacks. I'll take a hack over it not working at all, which is what I thought until a week ago.

Thanks to Cesar Chalom, Joel Leach, Dragan Nedeljkovich for providing some ideas for this post and Mike Lewis for providing the inspiration to look further into this and getting me to a working solution in Html Help Builder.


Creating Gravatar Web Image Avatars in FoxPro


Gravatar.com is a great and very popular Web service based way to associate an image with user accounts that are based on email addresses. It's easy to use and can provide a nice touch of personalization with very little effort to just about any Web application that uses email addresses for users. This post shows how you can use Gravatar from within your FoxPro based applications.

I recently restyled my West Wind Message Board support site, and one of the things that I think makes the messages look a little more interesting and personal is having an avatar - an image - associated with users. But rather than asking users to upload images and storing them on my servers, I opted to use Gravatar, a Web-based service that can be used in many applications and is supported by a large number of popular Web sites already. Ultimately it's a better choice for users, who only have to sign up once to associate an email address with a Gravatar to be used on many sites.

Here's what the Gravatar image looks like on a message board message:

[Screenshot: Gravatar image displayed next to a message board message]

The Gravatar service simply returns an image for a given email address and a few other parameters, all encoded into a URL that points at the Gravatar.com site. As a developer you use Gravatar by creating that URL and embedding it into an <img> tag's src attribute.

Above you can see the image that is associated with my email address. The cool thing about Gravatar is that once users have provided an email address - if they have a Gravatar account already - they don't have to do anything else to associate their Gravatar image with your application. If they have a Gravatar configured, the image is shown. If not, a default image is shown.

If an email address is sent to Gravatar that doesn't exist, you can either provide a URL to an alternate image (some sort of default that's appropriate for your app) or you can let Gravatar serve its default icon, which looks like this:

[Image: Gravatar's default icon]

How it works

The Gravatar service works by calling a URL on their site and providing a few known parameters. A URL to retrieve the image for an email address looks something like this:

http://www.gravatar.com/avatar/a4ec5092141a8649fe9d81527569a4c0?s=80&r=r

In an image control:

<img src="http://www.gravatar.com/avatar/a4ec5092141a8649fe9d81527569a4c0?s=80&r=r" 
     class="gravatar"
     alt="Gravatar image based on email address" />

The components you need to send to get a Gravatar image are:

  • The Email Address encoded with an MD5 Hash
  • An image size
  • A default image URL if the email address is not registered with Gravatar
  • A rating for the image (g, pg, r, x)

I've created a small reusable function that helps create Gravatar images in FoxPro. Using the function you can do:

? GravatarLink("rstrahl@west-wind.com",80) 
? GravatarLink("invalid@west-wind.com",60,,"r")

Here's the code for the GravatarLink function:

************************************************************************
*  GravatarLink
****************************************
***  Function: Creates an image URL for a given email address
***            that is registered with Gravatar.com.
***
***            Gravatar is a very popular avatar service that
***            requires only an email address to share a picture.
***            Used on many web sites so once you sign up your
***            picture will be used on many sites.
***    Assume: 
***      Pass: lcEmail - Email Address
***            lnSize - Image Size (square) 60-80 is usually good
***            lcDefaultImage - Url to an image if no match is found
***                             for email. Empty shows Gravatar's default
***            lcRating - g, pg, r, x   (Default: pg)
***    Return: URL to the Gravatar image
************************************************************************
FUNCTION GravatarLink(lcEmail,lnSize,lcDefaultImage, lcRating)
LOCAL lcHash, lcUrl

IF EMPTY(lnSize)
   lnSize = 80
ENDIF

IF !EMPTY(lcEmail)
   *** Gravatar expects the MD5 hash of the trimmed, lowercased address
   lcHash = LOWER(STRCONV(HashMd5(LOWER(ALLTRIM(lcEmail))),15))
ELSE
   lcHash = ""
ENDIF
IF EMPTY(lcDefaultImage)
   *** Gravatar default image displays
   lcDefaultImage = ""
ELSE
   lcDefaultImage = "&d=" + UrlEncode(lcDefaultImage)
ENDIF   
IF EMPTY(lcRating)
   lcRating = "pg"
ENDIF   

lcUrl = "http://www.gravatar.com/avatar/" + lcHash + "?" +;
       "s=" + TRANSFORM(lnSize) +;
       "&r=" + lcRating +;
       lcDefaultImage

RETURN lcUrl
* GravatarLink

The code is pretty straightforward - it basically builds up a URL as a string and adds the components provided by the parameters. The trickiest part of this is the MD5 encoding of the email address. The MD5 hash produces a binary value, which is then turned into a hex string with STRCONV().
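As a quick sanity check of the hash-and-convert step, the well-known MD5 value for "hello" should come back as the expected 32-character hex string:

```foxpro
? LOWER(STRCONV(HashMd5("hello"),15))
*** 5d41402abc4b2a76b9719d911017c592
```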

In order to encode the email address as an MD5 hash I use the following routine based on the content from the FoxPro Wiki a long while back:

************************************************************************
* wwAPI :: HashMD5
****************************************
***  Function: retrieved from the FoxWiki
***            http://fox.wikis.com/wc.dll?fox~vfpmd5hashfunction
***    Assume: Self standing function - not part of wwAPI class
***      Pass: Data to encrypt
***    Return: 
************************************************************************
FUNCTION HashMD5(tcData)

*** #include "c:\program files\microsoft visual foxpro 8\ffc\wincrypt.h"
#DEFINE dnPROV_RSA_FULL           1
#DEFINE dnCRYPT_VERIFYCONTEXT     0xF0000000

#DEFINE dnALG_CLASS_HASH         BITLSHIFT(4,13)
#DEFINE dnALG_TYPE_ANY          0
#DEFINE dnALG_SID_MD5           3
#DEFINE dnCALG_MD5        BITOR(BITOR(dnALG_CLASS_HASH,dnALG_TYPE_ANY),dnALG_SID_MD5)

#DEFINE dnHP_HASHVAL              0x0002  && Hash value

LOCAL lnStatus, lnErr, lhProv, lhHashObject, lnDataSize, lcHashValue, lnHashSize
lhProv = 0
lhHashObject = 0
lnDataSize = LEN(tcData)
lcHashValue = REPLICATE(CHR(0), 16)
lnHashSize = LEN(lcHashValue)


DECLARE INTEGER GetLastError ;
   IN win32api AS GetLastError

DECLARE INTEGER CryptAcquireContextA ;
   IN WIN32API AS CryptAcquireContext ;
   INTEGER @lhProvHandle, ;
   STRING cContainer, ;
   STRING cProvider, ;
   INTEGER nProvType, ;
   INTEGER nFlags

* load a crypto provider
lnStatus = CryptAcquireContext(@lhProv, 0, 0, dnPROV_RSA_FULL, dnCRYPT_VERIFYCONTEXT)
IF lnStatus = 0
   THROW GetLastError()
ENDIF

DECLARE INTEGER CryptCreateHash ;
   IN WIN32API AS CryptCreateHash ;
   INTEGER hProviderHandle, ;
   INTEGER nALG_ID, ;
   INTEGER hKeyhandle, ;
   INTEGER nFlags, ;
   INTEGER @hCryptHashHandle

* create a hash object that uses MD5 algorithm
lnStatus = CryptCreateHash(lhProv, dnCALG_MD5, 0, 0, @lhHashObject)
IF lnStatus = 0
   THROW GetLastError()
ENDIF

DECLARE INTEGER CryptHashData ;
   IN WIN32API AS CryptHashData ;
   INTEGER hHashHandle, ;
   STRING @cData, ;
   INTEGER nDataLen, ;
   INTEGER nFlags

* add the input data to the hash object
lnStatus = CryptHashData(lhHashObject, tcData, lnDataSize, 0)
IF lnStatus = 0
   THROW GetLastError()
ENDIF


DECLARE INTEGER CryptGetHashParam ;
   IN WIN32API AS CryptGetHashParam ;
   INTEGER hHashHandle, ;
   INTEGER nParam, ;
   STRING @cHashValue, ;
   INTEGER @nHashSize, ;
   INTEGER nFlags

* retrieve the hash value, if caller did not provide enough storage (16 bytes for MD5)
* this will fail with dnERROR_MORE_DATA and lnHashSize will contain needed storage size
lnStatus = CryptGetHashParam(lhHashObject, dnHP_HASHVAL, @lcHashValue, @lnHashSize, 0)
IF lnStatus = 0
   THROW GetLastError()
ENDIF


DECLARE INTEGER CryptDestroyHash ;
   IN WIN32API AS CryptDestroyHash;
   INTEGER hKeyHandle

*** free the hash object
lnStatus = CryptDestroyHash(lhHashObject)
IF lnStatus = 0
   THROW GetLastError()
ENDIF


DECLARE INTEGER CryptReleaseContext ;
   IN WIN32API AS CryptReleaseContext ;
   INTEGER hProvHandle, ;
   INTEGER nReserved

*** release the crypto provider
lnStatus = CryptReleaseContext(lhProv, 0)
IF lnStatus = 0
   THROW GetLastError()
ENDIF

RETURN lcHashValue
ENDFUNC
* HashMD5

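HashMD5() returns the raw 16 hash bytes. Gravatar URLs expect the hash as a 32 character lowercase hex string, which can be produced along these lines (the conversion loop is a sketch, not part of the library code):

```foxpro
*** Sketch: convert the 16 binary hash bytes into the lowercase
*** hex string format that Gravatar expects
lcHash = HashMD5(LOWER(ALLTRIM("user@example.com")))
lcHex = ""
FOR lnX = 1 TO LEN(lcHash)
   *** TRANSFORM(n,"@0") formats as 0x00000000 - keep the last two digits
   lcHex = lcHex + RIGHT(TRANSFORM(ASC(SUBSTR(lcHash, lnX, 1)), "@0"), 2)
ENDFOR
lcHex = LOWER(lcHex)   && 32 character hex string
```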
With this function in place it's now a snap to embed Gravatar images into HTML based applications. In a Web Connection (or any other template/script based environment that uses Fox code) you can now simply do:

<img src="<%= GravatarLink(poUser.Email,80,,"pg") %>" class="gravatar" />

And you're off to the races.

On the West Wind MessageBoard

If you're a user of the West Wind Message Board I recommend you head over to Gravatar.com and hook up your Avatar image if you're interested in showing an image with your messages. Go ahead, post a message and see your Gravatar pop up. Once you have a Gravatar you may find that many sites you participate on already use Gravatar images, so this will be useful in a lot of places beyond the message board or your own applications.


GravatarLink() is now also part of the wwUtils library in Web Connection and the West Wind Client Tools.


Global Values in Web Connection


Here's a question that comes up quite frequently about Web Connection: How do I create and manage global variables effectively in a Web Connection application?

Web Connection is somewhat unique in terms of Web back end solutions in that it actually has a dedicated server component and so has some 'state' - server state - that persists between individual requests. There are a number of ways that you can store 'global' data in a Web Connection instance:

  • PUBLIC variables
  • Properties on the wwServer instance
  • The wwServer::oGlobal Property
  • The wwServer::oResources Dictionary
  • The wwServer::oCache Dictionary

But before we delve into the details of each of these mechanisms it's important to understand what 'global' means in the context of a Web Connection application.

Global scope in Web Connection

Every Web Connection application starts out with one or more server instances that stick around as long as the instance is running. An instance is always tied to a specific EXE or DLL: The FoxPro IDE in development mode, a standalone FoxPro EXE when running in deployed file mode, or as COM EXE or DLL in COM mode.

In file mode this means the instance runs sitting on a READ EVENTS loop; in COM mode it means the COM instance instantiated by the client IS the actual server instance. In COM mode (ISAPI or .NET, EXE or DLL) Web Connection runs against a managed pool of instances, so each instance stays alive as long as the server is live.

The important point here is that each loaded EXE or DLL instance is its own silo of data, separate from every other instance. So while you can create global data using any of the mechanisms in the list above, that data is global only within the context of the current instance. Each Web Connection server has its own instance of global data - there's no in-memory sharing between instances. The only way to share data between instances is to store it in a database, on disk, or via some other inter-process communication mechanism.

That said, it's still quite useful to have global data in an application. Global data is good for resources that are expensive to load up repeatedly. For instance SQL Server connections are slow to establish each time, but very fast once open and already connected, so it makes sense to store a SQL connection globally so it can be used across multiple requests. Other objects like COM components or any sort of cached memory based data also qualify.
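As a sketch of that idea - the property name and connection string below are hypothetical - a SQL connection might be opened once when the server loads and reused by every request:

```foxpro
*** Sketch: cache a SQL connection handle on the server instance
*** so individual requests don't pay the connection cost each time
FUNCTION OnInit()
   this.nSqlHandle = SQLSTRINGCONNECT("driver={SQL Server};server=(local);" + ;
                                      "database=MyApp;trusted_connection=yes")
   IF this.nSqlHandle < 1
      *** connection failed - requests should check the handle before use
   ENDIF
ENDFUNC
```

Requests can then run SQLEXEC(Server.nSqlHandle, ...) without reconnecting on every hit.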

How Web Connection Instances are loaded

Web Connection instances are tied directly to a wwServer instance. In File Mode an instance of your wwServer subclass is created in the startup code in YourAppMain.prg which looks like this:

   *** Load the Web Connection class libraries
   DO WCONNECT
 
   *** Load the server - wcDemoServer class below
   goWCServer = CREATEOBJECT("wcDemoServer")
 
   IF TYPE("goWCServer") # "O"
      =MessageBox("Unable to load Web Connection Server",;
         MB_ICONEXCLAMATION,"Web Connection Error")
      RETURN
   ENDIF
 
   IF !goWCServer.lDebugMode
      SET DEBUG OFF
      SET STATUS BAR OFF
      SET DEVELOP OFF
      SET RESOURCE OFF
      SET SYSMENU OFF
   ENDIF
   
   *** Make the server live - READ EVENTS keeps the server alive and polling
   READ EVENTS

This code creates the server instance (and the server status form, if enabled) and then drops into READ EVENTS, which keeps the server alive.

In COM Mode the startup code goes away and the server instance is directly created via COM:

loServer = CREATEOBJECT("wcDemo.wcDemoServer")
loServer2 = CREATEOBJECT("wcDemo.wcDemoServer")

The server can be started as an EXE or a DLL, but the behavior is the same. If I create multiple instances like above I can set properties on each and they are treated independently. Any PUBLIC data in each of these servers is also completely independent of the others.

Don't treat Global data like you do in Desktop Application

A common misconception for newbies to Web development is that you can use PUBLIC variables or even global cursors to persist data on a per-user basis. Unlike desktop applications, which are typically tied to a specific user, Web applications serve requests for many users. A Web Connection instance isn't tied to a specific user, so any incoming request can come from any user of the system. If multiple instances are running, the same user may not even come back to the same instance in two consecutive requests.

In short, global data for per-user storage - whether in memory or in temporary cursors - is a terrible idea in most Web applications. The exception is if you settle on some careful naming scheme to do so, but even then persistence across requests should be avoided whenever possible.

Global scope then is only really useful for storing things that truly are application wide and preferably should only be used for things that really require it. Typically this means slow to load operations (like SQL connection loading for example) or pre-loading of data structures that can then be more easily reused later.

Storing Global Data in Web Connection

Let's look at the different approaches available from least desirable to more desirable.

PUBLIC: Avoid whenever possible!

In desktop applications PUBLIC variables are an easy (if frowned upon) mechanism for storing global data and this mechanism still works in Web Connection. However, due to the possibility of scope naming conflicts and the inability to effectively manage PUBLIC variables it's not a good idea to use them.

That said, some of Web Connection's internals actually use PUBLIC vars. Some library functions create global cached instances of objects that can be reused using PUBLIC. These objects use PUBLIC because they are generic and can be used outside of Web Connection (for example GetwwDotNetBridge() or GetwwRegEx()). Each of these functions create PUBLIC instances with __ prefixes to minimize naming conflicts and are used to cache resources that are expensive to create.

In general application development (outside of libraries) I otherwise can't see any good reason to use PUBLIC variables.
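For reference, the cached-instance pattern those library functions use looks roughly like this (the function and class names here are made up for illustration):

```foxpro
*** Illustration of the library pattern: a loader function that caches
*** an expensive object in a __ prefixed PUBLIC variable
FUNCTION GetMyParser()
IF VARTYPE(__MyParser) # "O"
   PUBLIC __MyParser
   __MyParser = CREATEOBJECT("MyParser")   && expensive to create
ENDIF
RETURN __MyParser
```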

Properties on the Web Connection wwServer Instance

A Web Connection server instance is defined by its wwServer subclass implementation. This is the server class and the class that persists through the lifetime of the server instance. It has state and remains active through potentially many requests until it is shut down through the Web Connection administration interface or by IIS when you restart the Web Server or otherwise recycle an Application Pool.

A server class definition looks like this:

**************************************************************
****          YOUR SERVER CLASS DEFINITION                 ***
**************************************************************
DEFINE CLASS wcDemoServer AS WWC_SERVER OLEPUBLIC
*************************************************************  

oMyProperty = .NULL. …

and you can create any number of custom properties on your server instance.

The server is always available either via a global variable called goWCServer, or via the Process.oServer property or the simpler Server variable inside of actual request processing in a wwProcess request or Web Control Framework page. If you create a custom property on wwServer and you want to access it in a process class or Web Control page you can always use:

Server.oMyProperty.DoSomeThing()

or

this.oServer.oMyProperty.DoSomeThing()

Using wwServer instance properties works and is reasonably clean, but if you are running in COM mode, any property you add to your class changes the ClassIds of your COM server. This means you have to re-register your COM object on the server, which is a bit of a pain.

wwServer.oGlobal

In order to avoid the COM re-registering issue you can use the wwServer.oGlobal property and dynamically add properties at runtime. This object is merely an instance of a CUSTOM class attached to wwServer for the sole purpose of allowing you to hang new objects off it. To use it in code you can simply do:

this.oGlobal.AddProperty("oMyProperty",CREATEOBJECT("MyObject"))

This adds a property to the oGlobal instance which you can then use anywhere in your application:

this.oGlobal.oMyProperty.DoSomething()

There's nothing magical about this of course - it uses only VFP's native dynamic language features of runtime type extension. It does however sidestep the COM registration issue.

If you'd rather not use the .AddProperty() method at runtime and instead use a separate object that defines properties explicitly, you can do that simply by assigning the oGlobal instance in the server's OnInit() or OnLoad() event.

For example, here I create a custom MyGlobalsContainerClass instance and assign it to wwServer.oGlobal and then use the custom class' configuration property later in code:

FUNCTION OnLoad()
 
this.oGlobal = CREATEOBJECT("MyGlobalsContainerClass")
this.oGlobal.Configuration  = CREATEOBJECT("StoreConfig") 
 
ENDFUNC

Note that wwServer.oGlobal was introduced in Web Connection 5.62. In prior versions you can simply add an .oGlobal property to your server class yourself:

DEFINE CLASS wcDemoServer AS WWC_SERVER OLEPUBLIC

oGlobal = .NULL.

FUNCTION OnInit()
   this.oGlobal = CREATEOBJECT("Custom")
   …
ENDFUNC

ENDDEFINE

which is pretty much all that Web Connection does internally in the later versions.

Note that you can also add properties directly to the server object. wwServer derives from the Relation base class so it doesn't have an .AddProperty() method, but you can use the global ADDPROPERTY() function instead:

ADDPROPERTY(this,"oMyProperty",CREATEOBJECT("MyObject"))
this.oMyProperty.DoSomething()

Although this works just as well as the previous example - perhaps with slightly simpler syntax on the resulting object - I prefer the oGlobal object because it is clearer what's happening. Both dynamic approaches avoid COM re-registration, though, so the choice is purely a preference.

 

wwServer.oResources

wwServer also has a wwServer.oResources dictionary of type wwNameValueCollection. Basically this is a memory based dictionary that can hold any kind of value including objects. So rather than adding properties to a class you can add and access these global values using simple dictionary syntax.

this.oResources.Add("MyProperty",CREATEOBJECT("MyObject"))

To use the resource entry you can then retrieve it from within Web Connection code:

loMyObject = Server.oResources.Item("MyProperty")
loMyObject.DoSomething()

The retrieval syntax requires an intermediary variable since FoxPro doesn't allow calling methods on function return values directly, but otherwise the behavior is very similar to the custom property approach. The nice thing about the dictionary is that your object structure doesn't change at all and you can easily iterate over all items in the resource dictionary.

wwServer.oCache Global Caching

On a somewhat related but not directly relevant note there's also the wwServer::oCache object. This wwCache object allows storing string values for a specified expiration time. It stores data in a cursor or - optionally - in a fixed table that can be accessed by multiple instances. wwCache is limited to string values. Although the class is meant for caching you can effectively hijack it as a long term object store by setting a very long timeout for the cache expiry. If you use a fixed filename the cache persists to a table that can in fact work across instances and can survive an instance shutdown and startup.

To add an item to the cache from the server class:

this.oCache.AddItem("RssFeedOutput",lcRssFeedOutput,9999999)

and to read it back in a process class:

Server.oCache.GetItem("RssFeedOutput")

Cache is interesting, but it's somewhat limited in that it works only with strings unlike all the previous mechanisms which can store values and objects. Still it might be useful for some scenarios that require global data/state to be persisted and it is the only option that allows multiple instances to share data from within the framework.
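If you do want to push an object through the string-only cache, one option is to serialize it first. A sketch, assuming the wwJsonSerializer class from the West Wind libraries is available and loSettings is some state object:

```foxpro
*** Sketch: store non-string state in the cache by serializing to JSON
loSer = CREATEOBJECT("wwJsonSerializer")
Server.oCache.AddItem("AppSettings", loSer.Serialize(loSettings), 86400)

*** ... later, potentially in a different request:
loSer = CREATEOBJECT("wwJsonSerializer")
loSettings = loSer.Deserialize(Server.oCache.GetItem("AppSettings"))
```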

Summary

There are a number of ways to store global variables in Web Connection, which is somewhat unique among Web environments. But just because you can doesn't mean that you should. Always be very clear about why you need to store data or objects globally and think long and hard about it. Global data consumes server resources, and those resources stay loaded for the lifetime of the instance. If you really need a resource frequently then it makes sense to store it globally. But if a resource is rarely used or cheap to load in the first place, it might be easier and more maintainable to simply create it as needed.

There's little reason to use PUBLIC variables in Web Connection applications. Instead, take advantage of the provided global stores - from object properties hanging off the wwServer instance, to the wwServer.oResources dictionary, or, if you're dealing with strings, the wwServer.oCache object.


Root Path Support with ~ in Web Connection Response Output


Here's a little not so well known tip that's useful if you're using Web Connection: when output is rendered, Web Connection automatically expands any URLs that use the ~/ path syntax into application relative URLs on the page.

This means you can write URLs like this (assuming a virtual directory of wconnect):

<a href="~/WebControls/Helloworld.wcsx">Hello World</a>

and expand that URL out to:

<a href="/wconnect/WebControls/Helloworld.wcsx">HelloWorld</a>

~/ paths basically say: fix up the ~ to mean the base path of my Web application. This lets you create application rooted URLs consistently, regardless of which virtual directory or root folder the URL is called from. This is different from relative paths, which depend on the current location of the page loaded in the browser - the generated path is a Web server root relative path based on the active application's path.

This means if your app runs in a virtual directory called wconnect the root path resolves to /wconnect (root folder/wconnect subfolder). If you're running in the root of the site, the root path resolves to /. Why does this matter? If you're developing applications that target different environments, the base path of the application might change. For example, you might develop your application in a virtual directory, but deploy the live application into a root Web or into a virtual directory with a different name. With ~/ urls, no changes are required when the application is moved to the live server.

Web Connection 5.0 also introduced a code based equivalent in Process.ResolveUrl():

lcUrl = Process.ResolveUrl("~/WebControls/HelloWorld.wcsx")

which accomplishes the same thing in code, so you can use this function to create root relative paths programmatically. This is especially useful for generic, reusable code that may not know where it is running from. The Web Control framework calls ResolveUrl() on all of its URL properties. ResolveUrl() returns fully qualified urls as-is; only urls that contain ~/ in the path are translated.
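A quick illustration of that behavior - the expansions shown assume the application's base path resolves to /wconnect/:

```foxpro
? Process.ResolveUrl("~/css/standard.css")     && /wconnect/css/standard.css
? Process.ResolveUrl("/images/logo.png")       && returned as-is
? Process.ResolveUrl("http://example.com/x")   && returned as-is
```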

Note that this feature started out as a feature specific to the Web Control Framework, but it has since migrated into the core framework and the wwPageResponse/wwResponseString objects, which post-process all output generated through the Response object. This means it works for Web Control Framework pages, raw Response.Write() output, templates and scripts etc. It's only applied to HTML output when the content type is set to text/html.

Url Pathing Options

Note that this ~/ pathing is different from relative pathing using ../ or rooted pathing that starts with a / at the site root. Rather, the ~ syntax creates an application relative URL that is always the same regardless of where your app runs.

Whenever possible when creating URLs in your applications both in markup and code you should choose paths in this order:

  1. Relative Paths
    Use relative paths whenever possible because they're the most portable. Relative paths (../css/standard.css or css/standard.css) work as long as your resources are stored relative to each other. Problems arise only if the resources are moved relative to each other, which is rare.
  2. Virtual Paths
    Use application relative paths because they are portable per application. Virtual path specifiers (~/css/standard.css which expands to /wconnect/css/standard.css) are always rooted to the application's root folder. In Web Connection you specify this value in the server configuration via the cVirtualPath in wc.ini or web.config using the VirtualPath key. The great thing about ~/ paths are that there's no relative pathing involved if you don't know where your request might be running from. They are also super useful for Web Control Framework User Controls that might be dropped into pages running for many different subfolders.
  3. Rooted Paths
    Never use hard coded URL paths to reference anything in your local site, whether rooted at the site or a virtual Web. While a hard coded path will work in whatever environment you're working in, it'll break as soon as you move the site. At that point you have lots of links to clean up, and that's never a good idea.

Both relative paths and virtual paths are adaptable, and their big advantage is portability. If you move the site or switch to a different virtual folder you don't have to change anything (ok - for ~/ paths you have to change the VirtualPath configuration value).

Relative paths are native to HTML and are parsed as the page is loaded. ~/ paths are expanded on the server and if you're using the wwPageResponse class as your Response class in Web Connection (which is the default in Web Connection 5.0) then any embedded ~/ paths are automatically expanded for you.

How it works and What is Transformed

The ~/ parsing is implemented as a post-processing feature of the wwPageResponse and wwResponseString classes. By default Web Connection renders all of its output into a string before writing it to its final destination. For example, when running in file based mode Web Connection's output goes into a temporary file, but Web Connection (5.0) first renders the output to a string and only then explicitly writes it to the file. This switch occurred in Web Connection 5.0 to allow more control over the HTTP output: using a string allows header manipulation because the final output isn't written to IIS until the response is complete. This means you can add headers and cookies even after output has already been written to the Response.

In Web Connection 5.0 the wwPageResponse class handles all primary output, and the ~/ processing is hooked up in the Render method like this:

*** Fix up ~/ paths with UrlBasePath
IF THIS.contentType = "text/html" AND VARTYPE(Process) = "O"
   lcBasePath = Process.cUrlBasePath
   IF !EMPTY(lcBasePath)
      this.cOutput = STRTRAN(this.cOutput,[="~/],[="] + lcBasePath)
      this.cOutput = STRTRAN(this.cOutput,[url(~/],[url(] + lcBasePath)
   ENDIF
ENDIF

The code first makes sure we're dealing with HTML content only and then translates any embedded URL expressions in attributes (the first STRTRAN) and in CSS styles (the second, url() based STRTRAN).

Based on this Web Connection will expand:

<a href="~/webcontrols/helloworld.wcsx">Hello World</a>

to:

<a href="/wconnect/webcontrols/Helloworld.wwd">Hello World</a>

Notice though that this will produce different results:

<a href="~/webcontrols/helloworld.wcsx">~/webcontrols/HelloWorld.wcsx</a>

which produces:

<a href="/wconnect/webcontrols/helloworld.wcsx">~/webcontrols/HelloWorld.wcsx</a>

Note that the second ~/ wasn't expanded because it's not inside an attribute - the plain string stays as written. If you want that to work you have to use <%= Process.ResolveUrl() %> instead:

<a href="~/webcontrols/helloworld.wcsx"><%= Process.ResolveUrl("~/webcontrols/HelloWorld.wcsx") %></a>

You can also expand paths in css tags like the following:

    <style>
        .salesitem
        {
            padding: 10px;
            padding-right: 30px;
            background-image: url(~/css/images/pin.gif);
        }
    </style>

which expands out to:

    <style>
        .salesitem
        {
            padding: 10px;
            padding-right: 30px;
            background-image: url(/wconnect/css/images/pin.gif);
        }
    </style>

It also works for inline styles:

<a style="background-image:url(~/css/images/help.gif)" 
href="~/webcontrols/Helloworld.wwd">Hello World</a>

which transforms into:

<a style="background-image:url(/wconnect/css/images/help.gif)" 
href="/wconnect/webcontrols/Helloworld.wwd">Hello World example</a>

Configuration

This feature is mostly transparent - there's nothing you have to set up or configure to make the renderer parse these values. There are two requirements though:

  • Make sure you're using wwPageResponse or the wwResponseString class for Response rendering
  • Make sure the VirtualPath property is set in web.config or wc.ini

wwPageResponse

wwPageResponse is Web Connection 5.0's default response class and unless you've explicitly overridden it this is the class used. If you're using an older version of Web Connection or you have an old application that was migrated to Web Connection 5.0 you might want to check and potentially switch to this class.

cResponseClass = "wwPageResponse"   && wwPageResponse40 for 4.x compatibility

in your Process class property definitions header.

VirtualPath Configuration

Process.ResolveUrl() and Process.cUrlBasePath retrieve the application's base path from the VirtualPath setting in the YourApplication.ini file - specifically from the process class configuration section. For example, here's the wwDemo configuration section:

[Wwdemo]
Datapath=C:\WWAPPS\WC3\wwDemo\
Htmlpagepath=c:\westwind\wconnect\
Virtualpath=/wconnect/

The VirtualPath key specifies the virtual path that is used. Note that this path is used for various other things as well so you should always set this in your applications. Web Connection's configuration Wizard automatically sets this but if you manually copy applications always make sure to set these values explicitly.

Summary

If you haven't used this feature before - take advantage of it. As small as it is, it's very useful and greatly simplifies creating clean, consistent URLs in your application. The post-processing mechanism is totally transparent - nothing is required other than making sure the application's INI file is set up.


FoxPro EXE DCOM Configuration on 64 Bit Systems


Visual FoxPro is a 32 bit application and when you register a VFP EXE COM server it gets registered in the 32 bit DCOM registry.  If you recall, you can create a FoxPro EXE server by creating a class and marking it as OlePublic:

DEFINE CLASS TestServer as Custom OLEPUBLIC

FUNCTION Foo()
RETURN "HELLO FOO!"
ENDFUNC

ENDDEFINE

You can then add the PRG to a FoxPro project and compile the server with:

BUILD EXE TestServer FROM TestServer

At this point you have a working DCOM server on your local machine that you built the project on. In order to register the server on another machine you'll have to explicitly register it with the following command:

TestServer.exe /regserver

Whenever you register an EXE COM server it is actually a DCOM server - meaning it can potentially be launched remotely from another machine. DCOM stands for Distributed COM, but realistically any EXE (out of process) server is essentially a DCOM server even when it's called locally - the same exact APIs are used to launch it and manage the RPC COM interface.
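That also means the server built above can be activated from another machine. A minimal sketch - the machine name is a placeholder, and the target machine needs appropriate DCOM launch and access permissions:

```foxpro
*** Local COM activation
loServer = CREATEOBJECT("TestServer.TestServer")
? loServer.Foo()   && HELLO FOO!

*** Remote DCOM activation (machine name is a placeholder)
loRemote = CREATEOBJECTEX("TestServer.TestServer", "\\RemoteServer")
? loRemote.Foo()
```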

DCOM Configuration: Doing the Bitness Dance

If you register your EXE COM server on a 32 bit system you can easily access DCOM Configuration manager in Component Services with:

DCOMCNFG

by typing the above into the Windows Run box or typing Component Services and drilling to the DCOM Config section of Component Services. The Windows Component Manager will pop up and show you all your DCOM servers and their individual settings including (hopefully) your EXE COM server.

(Screenshot: the 32 bit DCOM Config list in Component Services)

Remember that by default your server will show up by its ProgId (ie. TestServer.TestServer) unless you change the server definition. If you have more than one COM server in your EXE only one entry will show up. For this reason it's a good idea to always give your server an explicit name in the project's Project Info | Servers | Description property.

All this is fine and dandy on a 32 bit system. However, if you type DCOMCNFG into the Run box on a 64 bit system you will find… well, you won't actually find your server entry there. That's because DCOMCNFG on a 64 bit machine only shows COM servers registered for 64 bit, which VFP COM servers are not.

To get around this you can explicitly launch the 32 bit version of Component Services with this more verbose command that you can type into the Windows Run box or command prompt:

mmc comexp.msc /32

Once you do this, you should again see your server show up in the configuration.

Why does this happen?

Windows registers 32 and 64 bit components in different registry locations, and DCOMCNFG only displays the 64 bit servers written into the 64 bit registry hive. VFP's COM registration code predates 64 bit Windows and so doesn't write some of the corresponding registry keys. DCOMCNFG applies the same settings regardless of bitness, so any registration performed in the editor works the same for 32 and 64 bit components - but due to the registry differences VFP COM servers simply don't show up in the default view.

If you're using West Wind Web Connection 5.60 and later, the COM registration tools actually provide proper registration for VFP servers so they do show up in the 64 and 32 bit DCOM registry. You can use the Server Configuration Wizard or the programmatic or command line COM registration tools provided with Web Connection via the Console.exe application to get this functionality.


Extensionless Urls with Web Connection and the IIS UrlRewrite Module


In the last few months I've gotten a lot of requests from people who want to use extensionless URLs in their Web Connection applications. Extensionless URLs are URLs that - as the name implies - don't have an extension, but rather terminate in what looks like a directory. For example, you might expose customer data with formats like these:

http://localhost/wconnect/customers
http://localhost/wconnect/customers/32131

As you can see the URLs here don't end in a 'file name' like index.htm or MyPage.wcsx - rather they end in what looks more like a directory path. While it's mostly semantics, there's a lot of thought on the Web surrounding 'clean urls' like the above that better describe Web resources. Extensionless URLs tend to use the URL path to describe the 'routing' and parameters of a request, rather than the more traditional approach of script file links coupled with query strings - although query strings are still supported and often necessary even with extensionless URLs. With extensionless URLs we often think in terms of 'nouns' or 'resources' (ie. Customers) rather than actions (GetCustomer.wc or GetCustomerList.wc).

Long story short, extensionless URLs are not something that have been natively supported in Web Connection in the past. There are a number of ways that this can be accomplished - here are a couple:

  • Wildcard ScriptMaps
    Wildcard scriptmaps essentially allow you to route EVERY request the Web server receives to a specific extension like wc.dll or the .NET managed handler. You then need to override the default Web Connection processing to handle the incoming URLS.

  • The UrlRewrite Module for IIS7+
    IIS 7 provides for a downloadable UrlRewrite module that can be plugged into IIS and that provides a rules based engine that can either rewrite or redirect URLs. Url rewriting basically takes an incoming URL that matches and rewrites it as another - in the process changing all the path based IIS request data (like LogicalPath,PhysicalPath,ScriptName etc.), while keeping the other request data like QueryString() and Form() intact. Unlike a Redirect operation, a rewrite only changes the URL your application sees to the new URL. This process is better suited for Web Connection because it allows some control over what URLs specifically are routed to Web Connection.

For Web Connection applications Wildcard ScriptMaps are generally NOT a good idea because Wildcard scriptmaps map EVERYTHING to the Web Connection server including images, css, scripts, static HTML etc. and Web Connection (and any dynamic content engine really) has quite a bit of overhead compared to the static file services and the caching that IIS natively provides. This is really not a good match for Web Connection.

UrlRewrite on the other hand allows you to specifically create a rule that filters URLs and sends matches (or non-matches in our case) to another URL by 'rewriting' the URL - that is the paths are updated to reflect the new URL while the browser's address bar retains the original URL. This is much more efficient for Web Connection because it'll only get hit by requests that match the search criteria. The UrlRewrite approach is what I'll discuss in this post.

The IIS UrlRewrite Module

You can download the URL Rewrite Module for IIS7 and later using the Microsoft Web Platform Installer. Find the UrlRewrite Module under Products | Server and select Url Rewrite 2.0:

(Screenshot: Url Rewrite 2.0 selected in the Web Platform Installer)

and install from there. The installer does all the work of pulling the module and any dependencies.

Once the module is installed you can enable it in your Web Connection virtual or root directory by creating a rewrite rule. As with most IIS 7 settings, rewrite rules are stored in web.config. Here's a rule that rewrites extensionless URLs:

    <system.webServer>
         <rewrite>
            <rules>
                <rule name="ExtensionLessUrls" patternSyntax="Wildcard" stopProcessing="true">
                    <match url="*.*" negate="true" />
                    <conditions>
                        <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                    </conditions>
                    <action type="Rewrite" url="UrlRewriteHandler.wwd" appendQueryString="true" />
                </rule>
            </rules>
        </rewrite>        
    </system.webServer>

The above is pretty much cut and paste - you only need to change the Rewrite action url to ensure it points to a valid URL for your virtual directory.

Alternately you can also enter rewrite rules from the IIS Management Console:

RewriteIISManagementConsole

What this rule says basically is this:

  • Don't match files that have a . in the name (ie. no extension)
  • Match files that are not a physical directory
  • Match files that are not a physical file
  • If all conditions match, rewrite the URL to UrlRewriteHandler.wwd and forward the query string data from the original URL

In this case I specified that I want any extensionless URL to be re-written to UrlRewriteHandler.wwd which points at my wwDemo process class example which has a .wwd extension.

Note that this uses WildCard pattern matching - you can also use regular expression matching instead. Here I look for a . in the URL, which will work as long as none of the paths in your site contain '.'.
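As a sketch of the regular expression alternative, the same rule can be written with patternSyntax="ECMAScript" and a pattern that matches any path containing no dot. The rule name and pattern here are illustrative, not part of the Web Connection setup:

    <rule name="ExtensionLessUrlsRegex" patternSyntax="ECMAScript" stopProcessing="true">
        <!-- match only URLs whose path contains no '.' -->
        <match url="^[^.]*$" />
        <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
        </conditions>
        <action type="Rewrite" url="UrlRewriteHandler.wwd" appendQueryString="true" />
    </rule>

The regex form gives you finer control if you later need to exclude specific path prefixes from rewriting.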

Creating a FoxPro Handler for Extensionless Urls - UrlRewriteHandler

Prior to Web Connection 5.64 UrlRewriteHandler does not exist, but as of 5.64 there are a couple of special handler methods on the wwProcess class that can act as an endpoint for extensionless URLs. Basically, when you create the rewrite rule you point it at UrlRewriteHandler.wwd (use your own Process Class scriptmap instead of wwd) and off you go.

If you're using a version prior to 5.64, you can implement UrlRewriteHandler and OnUrlRewrite yourself on your wwProcess subclass, just as wwProcess itself does in 5.64 and later:

************************************************************************
* wwProcess ::  UrlRewriteHandler
****************************************
***  Function: Handler called when a UrlRewrite occurs via 
***            Specify UrlRewriteHandler.wwd (use your extension)
***            as the endpoint for extensionless URLS and then 
***            implement OnUrlRewrite() in your process class.
************************************************************************
FUNCTION UrlRewriteHandler()
LOCAL lcOrigUrl, lnItems, loRewrite, lnX, loSegments, lcVirtual, lcToken
LOCAL ARRAY laTokens[1]
 
*** Rewrite URL injects the original URL as a Server Variable
lcOrigUrl = Request.ServerVariables("HTTP_X_ORIGINAL_URL")
IF EMPTY(lcOrigUrl)
   lcOrigUrl = Request.GetExtraHeader("HTTP_X_ORIGINAL_URL")
   IF EMPTY(lcOrigUrl)
       *** or pull it off the querystring
       lcOrigUrl = Request.QueryString("url")
   ENDIF
ENDIF
 
*** Create result object
loRewrite = CREATEOBJECT("Empty")
ADDPROPERTY(loRewrite,"cOriginalUrl",lcOrigUrl)
 
*** Split path and query string
lnItems = ALINES(laTokens,lcOrigUrl,1 + 4,"?")
IF lnItems > 0
   ADDPROPERTY(loRewrite,"cOriginalPath",laTokens[1])   
ELSE
   ADDPROPERTY(loRewrite,"cOriginalPath",lcOrigUrl)   
ENDIF
 
*** Split path into Collection
loSegments = CREATEOBJECT("wwCollection")
lcVirtual = STRTRAN(LOWER(Request.GetVirtualPath()),"/","")
 
lnItems = ALINES(laTokens,loRewrite.cOriginalPath,1 + 4,"/")
FOR lnX = 1 TO lnItems
    lcToken = laTokens[lnX]
    IF !(LOWER(lcToken) == lcVirtual)
        loSegments.Add(lcToken)
    ENDIF
ENDFOR
 
ADDPROPERTY(loRewrite,"oPathSegments",loSegments)
 
THIS.OnUrlRewrite(loRewrite)   
 
ENDFUNC
*  wwProcess ::  UrlRewriteHandler

The UrlRewriteHandler method acts as the endpoint for a URL like UrlRewriteHandler.wwd, which should be plugged into the rewrite rule. When an extensionless URL is hit, IIS rewrites it to this method, which in turn picks up the original URL via the HTTP_X_ORIGINAL_URL header that IIS injects into the Request data. The method then forwards this original URL - along with a couple of parsed values - to the OnUrlRewrite method. OnUrlRewrite is a convenience method you can easily override; it receives this parsed object whenever a rewrite hit occurs.

The loRewrite object parameter has the following properties:

  • cOriginalUrl - The original extensionless URL that triggered the rewrite, as a full server relative path including the query string. Example: /wconnect/customers/3211?parm=val
  • cOriginalPath - The original extensionless URL that triggered the rewrite, without the query string. Example: /wconnect/customers/3211
  • oPathSegments - Each of the path's segments relative to the virtual directory or root, in a wwCollection instance. Example: two segments, customers and 3211, so lcId = loRewrite.oPathSegments.Item(2)

which makes it fairly easy to do something useful with the data passed.

A simple example of an Extensionless Url Handler

Let's look at a somewhat simplistic example implementation of an Extensionless URL handler that basically routes extensionless requests to requests with a known extension.

It would allow you to effectively create a handler for your application that maps a URL like:

http://localhost/wconnect/TestPage

and route them to the TestPage method in your process class.

The implementation of such a handler can be very basic:

************************************************************************
* wwDemo ::  OnUrlRewrite
****************************************
FUNCTION OnUrlRewrite(loRewrite)

this.oRewrite = loRewrite

*** Assume second segment is our method name
IF loRewrite.oPathSegments.Count > 0
   lcMethod = loRewrite.oPathSegments.Item(1)   
   RETURN EVALUATE("THIS." + lcMethod  + "()")   
ENDIF

this.ErrorMsg("Invalid Route",;
   "Routes must include at least 1 segment relative to the virtual or root application")

ENDFUNC

Basically this code does little more than checking for the first segment (TestPage) and assuming that this segment  is the name of the method you want to call in the process class. It takes the method and then simply Evaluates the method. Since the rewrite keeps the QueryString() and Form() data intact the behavior of the method is almost the same as if you called it directly with:

http://localhost/wconnect/TestPage.wwd

There is a difference, however, between a rewrite to this URL and accessing it directly: on a rewrite, all the paths inside of the TestPage method point at UrlRewriteHandler.wwd - not the original URL or testpage.wwd - because the rewrite rule points at UrlRewriteHandler.wwd. The original extensionless URL is only available via the parameter to the OnUrlRewrite(loRewrite) method. So things like Request.GetPhysicalPath(), GetLogicalPath(), GetExecutablePath(), GetCurrentUrl() etc. will point at UrlRewriteHandler.wwd - be aware that if you simply want to map existing scriptmapped extensions to non-scriptmapped extensions there might be pathing issues.
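If a handler method does need the original path, a minimal sketch is to read it off the rewrite object rather than the Request path methods. This assumes you store the loRewrite parameter on the process class in an oRewrite property, as the customer example later in this post does:

*** Inside a Process method handling a rewritten request
*** (assumes OnUrlRewrite saved its parameter to THIS.oRewrite)
lcPath = THIS.oRewrite.cOriginalPath    && ie. /wconnect/customers/3211
lcUrl  = THIS.oRewrite.cOriginalUrl     && same, including the query string
*** Request.GetLogicalPath() would return the rewritten
*** /wconnect/UrlRewriteHandler.wwd path instead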

Using multiple Path Segments

Let's expand on that last example with something a little more practical and consider creating a handler for the following two URLs which return customer information:

http://localhost/wconnect/customers

http://localhost/wconnect/customers/West+Wind+Technologies

Here I have two URLs that return a list of customers and a specific customer respectively. To handle both, I make a slight change to the OnUrlRewrite() method: it captures the loRewrite object in an oRewrite property so my handler methods can get at it:

oRewrite = NULL

************************************************************************
* wwDemo ::  OnUrlRewrite
****************************************
FUNCTION OnUrlRewrite(loRewrite)

this.oRewrite = loRewrite

*** Assume second segment is our method name
IF loRewrite.oPathSegments.Count > 0
   lcMethod = loRewrite.oPathSegments.Item(1)   
   RETURN EVALUATE("THIS." + lcMethod  + "()")   
ENDIF

this.ErrorMsg("Invalid Route",;
   "Routes must include at least 1 segment relative to the virtual or root application")

ENDFUNC

************************************************************************
*  wwDemo :: Customers
****************************************
***  Function: Returns a customer list, or an individual customer
***            when an id is passed as the second path segment
************************************************************************
FUNCTION Customers()

lcId = ""
IF this.oRewrite.oPathSegments.Count > 1
    *** Retrieve second parameter segment and query
    lcId = this.oRewrite.oPathSegments.Item(2)
ENDIF

*** No id passed - display a list
IF EMPTY(lcId)
    SELECT * FROM tt_cust ;
       INTO CURSOR TQuery ;
       ORDER BY Company   
    lcHtml = HtmlDataGrid("TQuery")
ELSE
    SELECT * FROM tt_cust ;
       WHERE company = lcId ;
       INTO CURSOR TQuery
    lcHtml = HtmlRecord("TQuery")
ENDIF

THIS.Standardpage("Show Customer",lcHtml)
ENDFUNC
*   Customers

The Customers method is now implemented in such a way that it can look at the second path segment to determine the ID of the customer to look up. If no second segment exists, a customer list is displayed; if there is an ID segment, the individual customer is loaded and displayed.

Both of those URLs can now be handled fairly easily as you can see.

Note that this is just one example of how you can handle your routing. I suspect in other real-world scenarios the logic to determine which method needs to be fired in OnUrlRewrite() might be more complex, especially if URL paths nest several levels deep. You can apply more complex rules in the OnUrlRewrite method to fit your needs.
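For example, rather than EVALUATE()ing whatever the first segment happens to be - which effectively lets any URL call any method on your process class - a sketch of an explicit route map might look like the following. The Orders handler is hypothetical and just shows the pattern:

*** Sketch: explicit route map instead of blind EVALUATE()
FUNCTION OnUrlRewrite(loRewrite)
THIS.oRewrite = loRewrite

lcRoute = ""
IF loRewrite.oPathSegments.Count > 0
   lcRoute = LOWER(loRewrite.oPathSegments.Item(1))
ENDIF

DO CASE
CASE lcRoute == "customers"
     RETURN THIS.Customers()
CASE lcRoute == "orders"       && hypothetical handler method
     RETURN THIS.Orders()
OTHERWISE
     THIS.ErrorMsg("Invalid Route","No handler matches: " + lcRoute)
ENDCASE
ENDFUNC

The explicit map is a little more typing, but it doubles as a whitelist of the methods that are reachable from the outside.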

Feedback Wanted

To be honest I haven't had much need to build extensionless URLs with FoxPro, so I don't have a really good idea what people are planning to accomplish with them. I'm pretty familiar with ASP.NET MVC style routing, where you can add route parameters - a possible future addition for Web Connection when using the Web Connection .NET Handler.

For the moment I'm curious to hear feedback on what you need to accomplish and/or whether the relatively simple implementation outlined here would address your needs or how you think it might be improved. Any feedback would be greatly welcome.


UTF-8 Encoding with West Wind Web Connection


For Web applications UTF-8 encoding has become fairly universal. According to WikiPedia:

"UTF-8 (UCS Transformation Format—8-bit[1]) is a variable-width encoding that can represent every character in the Unicode character set. It was designed for backward compatibility with ASCII and to avoid the complications of endianness and byte order marks in UTF-16 and UTF-32"

As FoxPro developers stuck with single-byte, 255-character code pages, UTF-8 allows us to represent Unicode output fairly easily - especially since Visual FoxPro 8, when STRCONV() introduced easy conversions between UTF-8, Unicode and ANSI character sets, making it super easy to create and parse UTF-8 from the current active character set.
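The relevant STRCONV() conversion types are 9 (current code page to UTF-8) and 11 (UTF-8 back to the current code page). A quick round-trip illustrates it:

*** Round-trip a string through UTF-8
lcAnsi = "TamStraße"
lcUtf8 = STRCONV(lcAnsi, 9)    && code page -> UTF-8: ß expands to two bytes
? LEN(lcUtf8) > LEN(lcAnsi)    && .T. - extended characters grow in size
? STRCONV(lcUtf8, 11)          && UTF-8 -> code page: TamStraße again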

UTF-8 solves a lot of character set display problems, especially on the Web, and I'd highly recommend that you use UTF-8 as your output format for Web content - from Web Connection or otherwise. Although early versions of Web Connection didn't do anything special with extended character sets and UTF-8, more recent versions make it fairly easy to create content in UTF-8 format and parse UTF-8 request data (form vars and query strings) just about automatically.

Setting up UTF-8 Encoding and Decoding in Web Connection

Starting with Web Connection 5.0, there are properties on both the Request and Response objects that allow UTF-8 transformations automatically. By default these aren't enabled so as to not break existing code, although enabling them is unlikely to cause a problem unless you explicitly set character set encoding via meta tags and actually encode your content already.

It's almost trivial to enable UTF-8 processing in Web Connection for all requests routed through a given Process class with this code:

************************************************************************
* wwDemo :: OnProcessInit
***************************
FUNCTION OnProcessInit
 
*** Explicitly specify UTF-8 encoding and decoding
Response.Encoding = "UTF8"
Request.lUtf8Encoding = .T.

ENDFUNC
* wwDemo :: OnProcessInit

These two innocuous looking property assignments tell Web Connection to:

  • Encode all output going through the Response object to UTF-8
  • Decode all input from Form Variables and QueryStrings to use UTF-8 Decoding

Note that both of these require Web Connection 5.x or later, and Response.Encoding is available only on the wwPageResponse class, which is the default in Web Connection 5. It is not available for the older wwResponse/wwResponseFile/wwResponseString classes.

The above code to enable UTF-8 encoding is hooked to the wwProcess::OnProcessInit() method, a hook you can implement in your custom wwProcess subclasses. This method fires on every request and is the place for per-request setup tasks - typically things like Session initialization (this.InitSession()) or authentication checks. Setting the request encoding is just another such task. Once the properties are set on the Response and Request objects, all Response output and Request input is automatically UTF-8 processed.

Response Encoding

The Response encoding works on the wwPageResponse class only, which is a string based output generation mechanism. When UTF-8 encoding is enabled your code basically builds up the Response into a string throughout your request code. Whether you explicitly call Response.Write() or other low level Response methods, or whether a more high level handler like the Web Control Framework or the script and template engines create the output, doesn't really matter - it all ends up as a string on the Response.cOutput property.

Once the process method is complete Web Connection assembles the final Response output by combining the Response.cOutput string plus the request headers into a complete response, which is then UTF8 encoded and returned back to IIS via the Web Connection .NET Handler or the ISAPI module.

The result is a fully UTF8 encoded response that properly displays any upper ASCII characters.

While it's possible to send extended characters back without UTF-8, it's much more complex for clients - especially non-Web Browser clients - to deal with custom character sets. The server would have to specify which character set was used (such as Windows-1252) and browsers would have to parse and decode that charset. UTF-8 simplifies this because UTF-8 is fairly easy to automatically map to the active character set in the client's OS. If you received a raw UTF-8 response in FoxPro (say by calling a UTF-8 URL with wwHttp) it's as easy as calling STRCONV(lcOutput,11) to turn it into FoxPro usable ANSI characters, which is much easier than trying to match a specific encoding type and character set.
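As a sketch of that client side, retrieving a UTF-8 page with the wwHttp class and mapping it to FoxPro's character set might look like this (the URL is illustrative):

loHttp = CREATEOBJECT("wwHttp")
lcOutput = loHttp.HttpGet("http://localhost/wconnect/testpage.wwd")

*** Response body is UTF-8 encoded - map it to the current code page
lcOutput = STRCONV(lcOutput, 11)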

UTF-8 makes it much easier to share text data spanning potentially many different character sets using a single encoding mechanism. This is why it's a good idea to always create Web output using UTF-8, rather than any other encoding.

Request Encoding

If you use UTF-8 Response encoding you will actually need to match it with UTF-8 Request Decoding. Why? Because if you embed a URL like this in a document:

http://localhost/wconnect/testpage.wwd?address=TamStraße

you'll find that this URL is turned into a UTF-8 encoded URL that looks like this by the browser when clicked:

http://localhost/wconnect/testpage.wwd?address=TamStra%C3%9Fe

Notice that there are TWO escaped values next to each other for the ß character: %C3%9F which is the UTF-8 encoded character. Why? Well, your document is UTF-8 encoded and so the URL sent to the server also is. The same goes for form data you enter into a form. In other words, the Request data is UTF-8 encoded.

In order to properly decode those UTF-8 values you need to use:

Request.lUtf8Encoding = .t.

or else Request.Form() or Request.QueryString() will return weird looking characters for any extended characters in strings. For example if I type:

TamStraße

into a textbox and retrieve the value when lUtf8Encoding = .F. I'll get:

TamStraÃŸe

which is basically the UTF-8 encoded version which is clearly not what you want. You can manually fix this easily enough:

? STRCONV("TamStraÃŸe",11)

which properly produces TamStraße, but the easier solution is to just set Request.lUtf8Encoding = .T. and have this happen automatically.

Note that UTF8 encoding is common in the browser and for most Web pages it's considered the default if no other encoding is specified. One big issue with character encoding is that the server doesn't always receive information on what encoding is used. In fact most Form posts don't specify the encoding so you'd have to guess. But since in most applications you control the page generation (ie. you generate the page that posts back) you know what the encoding of the parent page is which in turn determines the POST encoding and querystring encoding for embedded links.

The Moral of the Story is: Use UTF-8

If you run into any problems with character encoding in your Web Connection applications, the most likely culprit is that you forgot to properly encode your content. If this happens to you the easiest way to fix it almost always is to opt to output everything in UTF-8. If you're dealing with any extended character formats, or even multi-cultural applications, UTF-8 will always work. Whether FoxPro can map all characters received from the Web to the current character set - that is another issue altogether, but that's a tricky limitation of FoxPro that has no easy solutions short of switching character sets at runtime.



Custom Manifest Files in Visual FoxPro EXEs


Here's something I didn't know about Visual FoxPro: if you place a Windows manifest file in the same folder as the FoxPro project you are compiling, that manifest gets embedded into the compiled EXE. By default FoxPro generates its own manifest. One catch: if an EXE has an embedded manifest, external manifest files are ignored - so effectively FoxPro only ever honors the internally compiled manifest, never an external one.

All you have to do is to create a valid manifest file with the same name as the output EXE and then put that manifest file into the same folder as the PJX file.

So if I have an EXE called:

DotNetWsdlGeneratorConsole.exe

I have to create a matching manifest file in the Project's folder:

DotNetWsdlGeneratorConsole.exe.manifest

Once you compile, you can use any sort of resource editor to take a look at the generated manifest in the FoxPro EXE. Cool, eh?

What can I use a Manifest For?

Manifest files are useful for a number of things. They can tell the OS under which security context to load, add self-registering COM components and a host of other things. It often also contains information on how to render themes under Windows XP (I'll get back to that in a minute).

Let's talk about two things that are quite common requests in the FoxPro Community:

Force your application to run in administrative mode

If you're running on Windows Vista, 7 or 8, User Account Control forces all users to run as Standard users, even if the account is marked as an Administrator. When UAC is on, an application that requires administrative rights should explicitly declare this so the user is prompted for elevation on startup - and the only way to declare it is via a manifest file.

If you don't do this, and you don't have the appropriate rights your app will start but fail on required administrative operations, which is problematic - it's better to notify the user on startup to elevate the rights of the Application.

I need to do this for the West Wind Web Service Proxy Generator tool, the Wizard of which lives in an EXE called:

DotNetWsdlGeneratorConsole.exe

So - in the project directory - I create a manifest file:

DotNetWsdlGeneratorConsole.exe.manifest

that looks like this:

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1"
          manifestVersion="1.0">
  <assemblyIdentity name="DotNetWsdlGeneratorConsole.exe"
                    version="1.0.0.0"
                    processorArchitecture="x86"
                    type="win32" 
                    />
    <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
      <security>
        <requestedPrivileges xmlns="urn:schemas-microsoft-com:asm.v3">
          <requestedExecutionLevel level="requireAdministrator"
                                   uiAccess="false" />
        </requestedPrivileges>
      </security>
    </trustInfo>

</assembly>

These settings request Administrator rights at launch and should force the app to pop up a UAC dialog that asks for permission to run the application.

  • Now build the EXE - in my case I build it into a separate folder (the external manifest file doesn't need to ship with the EXE since it's now embedded).
  • And run the EXE from Explorer when UAC is on

When I do I see:

UacDialog

Unfortunately, notice the Publisher Unknown - this is because my EXE isn't properly signed, which is OK in my case, but if you need it here is some background information on how to sign an EXE:

http://www.wintellect.com/cs/blogs/jrobbins/archive/2007/12/21/code-signing-it-s-cheaper-and-easier-than-you-thought.aspx
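If your app can also run unelevated, you may want to detect at startup whether it actually received admin rights. A minimal sketch using the Windows shell32 IsUserAnAdmin() API (deprecated but still present on Windows 7/8):

*** Check whether the current process is running with admin rights
DECLARE INTEGER IsUserAnAdmin IN shell32

IF IsUserAnAdmin() = 0
   MESSAGEBOX("This application requires administrative rights.",;
              48,"Admin Check")
ENDIF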

Registrationless COM Activation

Another common task for manifest files is registrationless COM, which allows you to define each COM object you need access to and 'register' it inside of the manifest file. It's very easy to do this with code like the following:

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity name="DotNetWsdlGeneratorConsole.exe"
                    version="1.0.0.0"
                    processorArchitecture="x86"
                    type="win32" 
                    />
  <file name="multithreadserver.dll">
  <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}"
              threadingModel="Apartment"
              progid="multithread.multithreadserver"
              description="multithread.multithreadserver" />
  </file>

  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
      <requestedPrivileges xmlns="urn:schemas-microsoft-com:asm.v3">
        <requestedExecutionLevel level="requireAdministrator"
                                 uiAccess="false" />
      </requestedPrivileges>
    </security>
  </trustInfo>

</assembly>

Notice the file and comClass elements in the configuration. The file element describes the file name (in the same folder) and comClass describes the clsid, progid and threading model for the DLL to be loaded. You can have multiple comClass elements for multiple COM objects contained within the one physical file on disk.
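With the embedded manifest in place, the FoxPro EXE can instantiate the class even though it was never registered - the class names here mirror the manifest above:

*** No regsvr32/COM registration required - resolved via the embedded manifest
loServer = CREATEOBJECT("multithread.multithreadserver")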

FoxPro's Default Manifest

It's interesting to take a look and see what a FoxPro EXE's manifest actually looks like. If you have Visual Studio installed (or any other tool that can extract and view Embedded Resources) you can take a look at the EXE file.

To do this:

  1. Open Visual Studio
  2. File | Open | File and pick your compiled EXE file

If you do this Visual Studio opens the Resource editor for the EXE since manifests are embedded as resources. Here's what it looks like in VS2012:

ResourcesVs

If you drill into the '1' file - the manifest - you get a split binary/text view of the data. You can cut and paste the text into a new File | New | Xml File window to view the manifest a little more easily.

The resulting XML looks like this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1"
          manifestVersion="1.0">
  <assemblyIdentity
    version="1.0.0.0"
    type="win32"
    name="Microsoft.VisualFoxPro"
    processorArchitecture="x86"
/>
  <description>Visual FoxPro</description>
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <requestedExecutionLevel level="asInvoker" />
      </requestedPrivileges>
    </security>
  </trustInfo>
  <dependency>
    <dependentAssembly>
      <assemblyIdentity
        type="win32"
        name="Microsoft.Windows.Common-Controls"
        version="6.0.0.0"
        language="*"
        processorArchitecture="x86"
        publicKeyToken="6595b64144ccf1df"
        />
    </dependentAssembly>
  </dependency>
</assembly>


Note that FoxPro automatically adds the requestedPrivileges attribute, but it's defaulted to 'asInvoker' which means it runs in the default system context of the user (which is the default for Windows). In effect this is not necessary, but for whatever reason the FoxPro developers thought this should be there.

The default manifest also declares a dependency on the Windows Common Controls assembly, which forces FoxPro to use the latest version of the common controls. This is important on XP because it allows FoxPro to use themes, which without this entry would not work.

So, for this reason, it's important that you add that last dependency into your custom manifest files as well.

So, my complete custom manifest file now looks like this:

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1"
          manifestVersion="1.0">
  <assemblyIdentity name="DotNetWsdlGeneratorConsole.exe"
                    version="1.0.0.0"
                    processorArchitecture="x86"
                    type="win32"
                    />

  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
      <requestedPrivileges xmlns="urn:schemas-microsoft-com:asm.v3">
        <requestedExecutionLevel level="requireAdministrator"
                                 uiAccess="false" />
      </requestedPrivileges>
    </security>
  </trustInfo>

  <file name="multithreadserver.dll">
    <comClass clsid="{af2c2811-0657-4264-a1f5-06d033a969ff}"
              threadingModel="Apartment"
              progid="multithread.multithreadserver"
              description="multithread.multithreadserver" />
  </file>

  <dependency>
    <dependentAssembly>
      <assemblyIdentity
        type="win32"
        name="Microsoft.Windows.Common-Controls"
        version="6.0.0.0"
        language="*"
        processorArchitecture="x86"
        publicKeyToken="6595b64144ccf1df"
        />
    </dependentAssembly>
  </dependency>

</assembly>

and if I build my EXE with this manifest file in the same folder as the project file, I get this exact manifest embedded into my FoxPro EXE.

Hope some of you find this useful - I know I will, as I have a couple of administrative apps that simply work better with admin rights enabled, instead of having to write instructions somewhere that "you have to run this application with administrative rights." Sweet.


wwDotnetBridge for FoxPro .NET Interop is now Free and Open Source


I'm happy to announce that as of a couple of days ago, wwDotnetBridge - our library to access .NET components from Visual FoxPro code - is now free and open source. You can find out more about wwDotnetBridge here:

What is wwDotnetBridge?

wwDotnetBridge is a library that makes it easy for FoxPro to access .NET code. While there's a native mechanism available to call .NET components via .NET COM Interop, that mechanism is woefully limited to components that are actually registered to COM, which is very few components, except those you create yourself. wwDotnetBridge provides an easy mechanism to access just about any .NET component regardless of whether it's registered to COM or not and provides access to methods and classes that native COM Interop can't handle directly.

wwDotnetBridge provides a custom .NET runtime host that is loaded into Visual FoxPro. This custom host allows instantiating .NET components without requiring COM registration: it manages loading the runtime and provides a proxy into the .NET framework. wwDotnetBridge still uses COM Interop - the objects you interact with are still COM objects and exhibit the same behaviors as objects used with native COM Interop - but the instantiation process goes through wwDotnetBridge instead.

To create an instance of a .NET class with wwDotnetBridge you use syntax like this:

loBridge = CreateObject("wwDotNetBridge","V4") 
loBridge.LoadAssembly("InteropExamples.dll")
loFox = loBridge.CreateInstance("InteropExamples.Examples")

So instead of CREATEOBJECT("ComProgId"), using wwDotnetBridge involves calling loBridge.CreateInstance("dotnetnamespace.dotnetclass"). Once instantiated, the object behaves the same way as one returned from native COM Interop. CreateInstance also allows instantiating .NET classes that have parameterized constructors - the parameters can be passed after the type name to CreateInstance.

Once the wwDotnetBridge instance has been instantiated and any required assemblies have been loaded a bunch of additional functionality becomes available to access on .NET components. One of the most common things you might do with the native classes in the .NET runtime is to access static methods.

For example the following uses the .NET EventLog component to write an entry into the Windows Event Log:

loBridge = CreateObject("wwDotNetBridge","V4")
 
lcSource = "FoxProEvents"
lcLogType = "Application"
 
IF !loBridge.Invokestaticmethod("System.Diagnostics.EventLog",;
                                "SourceExists","FoxProEvents")
    loBridge.Invokestaticmethod("System.Diagnostics.EventLog",;
                                "CreateEventSource",;
                                "FoxProEvents","Application")
ENDIF
 
*** Write out default message - Information
* public static void WriteEntry(string source, string message)
loBridge.Invokestaticmethod("System.Diagnostics.EventLog",;
                            "WriteEntry",lcSource,;
                            "Logging from FoxPro " + TRANSFORM(DATETIME()) )  

This sort of thing was not possible directly with native .NET COM Interop since it only works with types exported to COM. Without wwDotnetBridge, using COM Interop often involved writing a custom .NET component that performed a few tasks like the above and then calling that component from FoxPro via COM Interop. With wwDotnetBridge many of these simple tasks can be accomplished directly from within Visual FoxPro code, without requiring you to write any code in .NET. Not that it's a bad idea to create an intermediary .NET assembly for truly complex tasks - sometimes it's vastly easier to write a little bit of .NET code IN .NET rather than trying to access it all from FoxPro - but for many things built into the .NET framework that are abstracted enough, it's simply no longer necessary to create a separate component.
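Static property access works similarly to static method invocation. A small sketch using wwDotnetBridge's GetStaticProperty() against a standard .NET property:

loBridge = CreateObject("wwDotNetBridge","V4")

*** Read a static .NET property - no COM registration involved
lcMachine = loBridge.GetStaticProperty("System.Environment","MachineName")
? lcMachine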

I don't want to show too many examples here since I covered a ton of them in the wwDotnetBridge White Paper. If you want to see more examples of what wwDotnetBridge can do and how it improves upon native COM Interop the white paper is a great place to start.

Why now?

wwDotnetBridge has been around for quite a while - since 2007 - and has been a part of West Wind Web Connection and West Wind Client Tools ever since. However, it hasn't exactly gotten the attention I thought it would get, primarily because it's been buried inside the manifold functionality of these two very rich products. I'm hoping that by releasing it as a free standalone component it will show up on more people's radar as they need to expand the reach of their FoxPro applications into new functionality available through .NET.

I created wwDotnetBridge because I had several applications that needed to interface with .NET functionality, and the COM registration requirements were causing me major headaches during installation on some machines. Additionally, I continually ran into issues with not being able to access certain .NET objects and members from FoxPro code. After some thought I decided to create an interop library that would help with proxying values back and forth between .NET and FoxPro. What started as a small support library quickly turned into a comprehensive set of components that can fix up many common problem types that don't marshal between .NET and FoxPro, and the end result was wwDotnetBridge. Since then I've been using wwDotnetBridge for all of my COM Interop code from FoxPro.

One product in particular - the West Wind Web Service Proxy Generator for Visual FoxPro - relies heavily on the features of wwDotnetBridge and has also been a source of many, many improvements to the library, as it exercises a huge variety of usage scenarios for .NET result types. Whenever there have been problems with specific types I've been able to either find workarounds to access them in FoxPro or - more often than not - provide helper objects that handle these problem types automatically.

In any case, I've found wwDotnetBridge immensely useful in my FoxPro work, and I'm hoping it'll be useful to some of you as well. If you plan on using .NET functionality in your FoxPro applications, check out wwDotnetBridge - it'll make life a lot easier.


SouthwestFox Sessions Slides, Samples and Links


Finally had some time to take a breather and package up the materials from the Southwest Fox Conference last month. The conference was a lot of fun - good to see so many old developer friends again, even if it's such a small group that's left - sigh. In any case, I did two sessions on .NET Interop between FoxPro and .NET, which is probably not the most popular topic there is, but I thought it was worthwhile to do these in light of the changes since I originally covered these topics many years ago.

In particular, .NET 4.0 simplifies many aspects of COM Interop. For ASP.NET especially, the dynamic features of C# make it much easier to access COM components without all the type library import problems that plagued older versions - problems caused by FoxPro's flawed type library exporter, which lacks support for object hierarchies and proper naming.

The other change is the use of wwDotnetBridge - which is now free and open source - that makes it much easier to access just about any .NET component directly from FoxPro without having to create an intermediate .NET assembly first. This opens up a lot of new reach for FoxPro in interacting with new technologies that are not easily available otherwise. Just yesterday I spent a few hours re-working the JSON serialization features in Web Connection and the Internet and Client Tools to use a reliable .NET library as a plug-in replacement for the slow and inefficient logic that's possible natively in FoxPro - much faster and much more reliable. There's so much that can be done to enhance FoxPro's features and reach that I hope more people will take a look at this to extend the life of their Web apps or to start dabbling with .NET without giving up FoxPro altogether. I only wish I had made this library open source a bit earlier…

Anyway… here are the two session white papers and slides and samples which are linked from them:

Calling .NET Components from Visual FoxPro using wwDotnetBridge

.NET is here to stay, and you can take advantage of the rich functionality in the .NET framework from Visual FoxPro. You can access code in the .NET framework as well as Microsoft, third party and your own .NET libraries. This article expands on my previous COM Interop articles and introduces the open source wwDotnetBridge library that lets you instantiate and interact with most .NET types directly from Visual FoxPro code. It's a great way to extend Visual FoxPro's reach as well as allowing FoxPro developers to create their own .NET components that can be interacted with from FoxPro.

Resources:

Calling FoxPro COM Components from ASP.NET Revisited

If you need to call FoxPro COM components from ASP.NET recent changes in .NET 4.0 have made this process a bit easier. This article expands on how to create FoxPro COM components in .NET and ASP.NET in particular by using more modern technologies like ASP.NET MVC and Web Services to call FoxPro COM Components taking advantage of the Dynamic language improvements in .NET that make it much easier to consume FoxPro COM components.

Resources:

Enjoy.

Using IIS Express with West Wind Web Connection


One of the complaints I hear a lot when it comes to Web development on Windows is that IIS configuration is difficult, especially on a local development machine. IIS is a pretty hefty application that isn't installed in Windows by default, so there's a bit of effort involved in setting up IIS and adding the appropriate options to it. In Web Connection 5.65 and later we've added direct support for IIS Express as part of the installation process.

A couple of years ago Microsoft released IIS Express, which is a standalone, installable version of IIS, that's very compact and can be run from the Windows Command Line. Unlike the Cassini/Visual Studio Web Server that came before it, IIS Express is 95% feature compatible with the full version of IIS and is based on the same code base. It includes all the core functionality including features like most of the security protocols (Windows Auth, Basic Auth) and support for ISAPI extensions. But most importantly IIS Express can run without Administrative rights and it doesn't require extensive installation or configuration.

Here are some of the features of IIS Express:

  • Doesn't require a full installation of IIS.
  • Can run without the need for administrative privileges.
  • Is manually launched and shut down via Command Line - no Service running.
  • Doesn't expose any remote connections by default.
  • Runs on all Windows XP and later versions of Windows including Home and Starter editions.
  • Is a full featured implementation of the full IIS Server functionality.
  • Supports ISAPI operation, Windows and Basic Security (unlike the Cassini server)
  • Is a small downloadable package that installs quickly (<5 megs)
  • Works with Web Connection with the .NET Handler (WebConnectionModule.dll) and classic ISAPI (wc.dll)

In essence IIS Express includes all the features required to run just about any Web application, including Web Connection applications from a small standalone server.

Getting IIS Express

IIS Express is a simple and small downloadable package you can grab from Microsoft from this URL:

IIS Express 7.5 (Windows XP,2003)
http://www.microsoft.com/en-us/download/details.aspx?id=1038

IIS Express 8.0 (Windows Vista, 7, 2008, 8)
http://www.microsoft.com/en-us/download/details.aspx?id=34679

Additionally, IIS Express requires .NET 4.0, if you don't already have it installed (Windows 8 ships with it, and many applications install .NET 4.0). A typical install is about a 25 meg download. The Web Installer checks what's on your system, downloads only what it needs, and doesn't require a reboot.

.NET 4.0 Download
http://www.microsoft.com/en-us/download/details.aspx?id=17851

Using IIS Express with Web Connection

IIS Express is a standalone executable that can be launched from the Windows command line. Web Connection includes some helpers that let you launch it directly from within FoxPro. The easiest way is from the Web Connection menu:

DO WCSTART

to start the Web Connection menu in the Visual FoxPro IDE. Then click on Web Connection | Start IIS Express Standalone Web Server:

WebConnectionMenu

This brings up a dialog that lets you select a path and port for the server:

IISExpressDialog

You can also launch IIS Express from FoxPro via a Web Connection Console command programmatically:

DO console WITH "LAUNCHIISEXPRESS","c:\westwind\wconnect",8080

This means you can easily launch IIS Express under program control to point at your site - maybe on startup of your app when in debug mode, or via some other mechanism (environment reset etc.).

Once launched you can then access your Web site simply via this Url:

http://localhost:8080/

or to access a specific page:

http://localhost:8080/testpage.wwd

Note that IIS Express always launches as a root site, rather than in a virtual directory. So if you were previously testing a site like http://localhost/wconnect/default.htm in IIS, the new URL will now be a root site URL: http://localhost:8080/default.htm. If you are using script map extensions and relative paths in your applications, as you always should, this shouldn't affect your site's operation in any way. If it does cause problems, it's a good indicator that your URLs could use some adjustment to make them more portable and always work with relative paths (hint, hint).

Here's another tip: If you don't have any Web Server installed on your machine at all, you can just use port 80 for your site, which then lets you omit the port number on the URL. So your URL with port 80 simply becomes:

http://localhost/testpage.wwd

which is a little cleaner and easier to remember.

Configuration for IIS Express in Web Connection

When running IIS Express you'll want to use the .NET Handler for configuration. This is to ensure that your script installations are portable and can work out of any folder even if your app moves.

If you install Web Connection from scratch, or you use any of the Wizards to create a new Application, Process class or the Server Configuration Wizard, Web Connection offers a pre-configured option for IIS Express:

IIS Express Configuration

This option automatically configures the site for using the .NET Managed Handler for any script maps that are created. Specifically it creates entries like this in the web.config file:

<configuration>
  <system.webServer>
    <validation validateIntegratedModeConfiguration="false" />
    <handlers>
      <!-- Web Connection Module in IIS 7 in Integrated Mode -->
      <add name=".wc_wconnect-module" path="*.wc" verb="*"
           type="Westwind.WebConnection.WebConnectionHandler,WebConnectionModule"
           preCondition="integratedMode" />
      <add name=".wcsx_wconnect-module" path="*.wcsx" verb="*"
           type="Westwind.WebConnection.WebConnectionHandler,WebConnectionModule"
           preCondition="integratedMode" />
      …
    </handlers>
  </system.webServer>
</configuration>

These are mappings to the Web Connection .NET Handler that are portable and allow the scriptmaps to be valid even if the site is moved to a new location on disk or on another machine. IIS 7 and later uses .NET natively so using the .NET handler is very efficient and additionally provides many performance and administration enhancements over the classic ISAPI module. When running IIS 7 and later in general we recommend using the .NET handler.

Administration - where does the Configuration come from?

IIS Express is a local Web Server and gets all of its settings from configuration files stored in your user profile, rather than in any system store that might require administrative rights. When launching IIS Express from Web Connection as described above, Web Connection uses an Application.config template from <WWWC install folder>\Visual Studio\IIS Express\ApplicationHost.config. This configuration file holds the site configuration for the IIS Express instance. This template is customized with the path and port specified and then written out to your temp folder as IISExpress_WebConnection_Application.config.

Once the server is running you can find all running instances of IIS Express in your Windows Task Tray:

TaskTray

You can right-click on the IIS Express icon and then get a list of all the running IIS Express instances (although you should run only one for Web Connection!):

IISExpressInstances

This shows the path, port and start URL for the IIS Express instance and also the location of the temporary applicationhost.config file that's actually used. Remember that this file is temporary and created each time Web Connection launches an IIS Express instance, so if you need to make changes to the configuration, make them in the file located in \Visual Studio\IIS Express\ApplicationHost.config, which is then applied to all IIS Express instances launched afterwards.

Shutting down IIS Express

As you saw in the previous image you can shut down IIS Express via the Stop All button. You can also right click any of the sites and click on Stop to shut each down individually. Once shut down they are gone and need to be explicitly restarted manually via the start options shown earlier.

IIS Express vs full IIS?

IIS Express is a nice addition to the supported Web Servers in Web Connection. I'm only sorry I waited so long to add this functionality - it could have been available at least a year ago. All of this is part of an effort to revamp the configuration and deployment features of Web Connection, which have been worked on over the last two versions. Web Connection 5.64 and 5.65 added much better support for the Web Connection Module (which is recommended for IIS Express) and support for root Web site configuration. You will see more of these types of improvements in the future…

Does this mean you should not use full IIS on your machine to develop Web Connection applications? Perhaps - IIS Express certainly makes getting up and running much easier. But personally I still like to run with a full version of IIS, and I recommend that if you have access to IIS and can run in an administrative environment, you install and test your application on a full version of IIS at some point. This lets you see the application operate under full IIS, and lets you administer the server using the same IIS administration tools you will also use on the live server.

The good news is that if you configure a site to run with IIS Express, moving to a full version of IIS is pretty easy - mainly you'll need to configure the Web site and/or virtual directory for operation with IIS. This step can be automated with the Web Connection Server Configuration Wizard, or you can do it manually.

wwDotnetBridge and .NET Versions


Judging from questions on the message board and private support issues that I've been debugging with customers, there's some confusion about how wwDotnetBridge works with various .NET versions. In this post I discuss how wwDotnetBridge loads the .NET Runtime and how .NET versions are managed by the hosted runtime.

In case you haven't heard about wwDotnetBridge, it is an open source library to make it easier to interact with .NET components by providing much richer access to .NET functionality than is possible through regular COM Interop.

You can find out more about wwDotnetBridge here:

How wwDotnetBridge loads the .NET Runtime

One of the main features of wwDotnetBridge is that it hosts the .NET Runtime directly inside of Visual FoxPro. By doing so it's possible to bypass the COM registration requirements for .NET COM Interop, which are both cumbersome and seriously limit what .NET components you can access with standard COM Interop. wwDotnetBridge hosts the .NET Runtime explicitly and loads itself into it, which allows loading arbitrary .NET components directly and offers access to a much richer feature set and to most .NET components.

In more detail here is how it works:

wwDotnetBridge hosts the .NET Runtime manually inside your Visual FoxPro IDE or compiled FoxPro EXE process. It does this by using the Windows hosting APIs - CorBindToRuntimeEx() specifically - to create a Runtime host instance and load an AppDomain into it. The API returns a reference to the Runtime host via COM, along with a reference to the AppDomain. wwDotnetBridge's loader then loads the wwDotnetBridge .NET component into the AppDomain, retrieves a reference to that .NET object over COM and passes it back to FoxPro.

This instance is then stored on the wwDotnetBridge::oDotnetBridge property, which hangs on to it. The instance can then be used to load assemblies, create .NET object instances (without COM registration) and perform all the other rich features available on the wwDotnetBridge object - most of these tasks are wrapped by the FoxPro wwDotnetBridge instance.

Although the .NET component was launched using COM and behaves basically like any other .NET COM object once returned to Visual FoxPro, the component did not have to be registered in the registry. Rather, wwDotnetBridge loads components from within the .NET Runtime based on the component's name. This makes it possible to launch just about any .NET component directly, regardless of whether it's registered for COM Interop or not.

Here's a graph that shows the general flow of the loader process:

wwdotnetBridgeArchitecture

The end result of this process is that you end up with the wwDotnetBridge FoxPro class that lets you instantiate any .NET component - including those that aren't registered through COM - directly from Visual FoxPro:

do wwDotNetBridge  && Load library
loBridge = CreateObject("wwDotNetBridge","V4")
loBridge.LoadAssembly("bin\InteropExamples.dll")

*** Create a .NET object instance
loFox = loBridge.CreateInstance("InteropExamples.Examples")

*** Call a method on the .NET object and return another .NET object
loPerson = loFox.GetNewPerson()
? loPerson.FirstName
...

You can find out more about what you can do with this functionality in the white paper. In this post the focus is the .NET Runtime loading and how to manage the .NET version loaded.

Understanding .NET Runtime Loading

One key aspect to understand about .NET Runtime loading is that only one instance of the .NET Runtime can be active in a process at any given point in time. Although wwDotnetBridge allows you to explicitly specify the runtime version to load, only the first load actually loads an instance of the Runtime. All subsequent loads simply use the instance of the .NET Runtime that already exists in memory.

This means if you do something like the following:

loBridge = CreateObject("wwDotNetBridge","V4")
? loBridge.GetDotnetVersion()
loBridge2 = CreateObject("wwDotNetBridge","V2")
? loBridge2.GetDotnetVersion()

both instances actually get Version 4.0 instances of the .NET Runtime. The result from GetDotnetVersion() in both cases is:

.NET Version: 4.0.30319.18033
file:///C:/WWAPPS/WC3/WWDOTNETBRIDGE.DLL

This behavior has caused some confusion to some developers, as they expect to get different versions of the runtime for each of those commands.

It bears repeating: Only one instance of the .NET Runtime will be loaded - the first instance loaded is the instance that all wwDotnetBridge code will run under.

Runtime Compatibility

It's important to understand that .NET 2.0 components (that is, .NET 2.0, 3.0 and 3.5 components, all of which use the .NET 2.0 Runtime) are forward compatible and can run in .NET 4.0. You can easily load .NET 1.x and 2.0 components in a .NET 4.0 Runtime instance. The reverse however is not true: .NET 4.0 compiled assemblies will not load in the .NET 2.0 Runtime, even if the .NET 4.0 compiled component only uses .NET 2.0 level code.
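To illustrate forward compatibility, here's a minimal sketch - the assembly and class names are hypothetical, stand-ins for any component compiled against .NET 2.0:

do wwDotnetBridge  && Load library
loBridge = CreateObject("wwDotNetBridge","V4")

*** Hypothetical assembly compiled against .NET 2.0 - loads fine in the 4.0 runtime
loBridge.LoadAssembly("MyNet2Component.dll")
loComp = loBridge.CreateInstance("MyNamespace.MyNet2Class")

*** The reverse is not true: with CreateObject("wwDotNetBridge","V2")
*** a .NET 4.0 compiled assembly would fail to load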

Side Effects

As you might expect, this behavior can potentially be confusing or cause some problems if you're not careful about which Runtime gets loaded first. Essentially you need to ensure that the highest version of .NET that you expect to use gets loaded first, so that all other components will then also use this highest version.

If you have a component that expects Version 4.0, but somewhere along the line the .NET 2.0 Runtime was loaded (with the "V2" switch explicitly set), the version 4.0 component will not be able to load.

This can be especially tricky if you use .NET as part of reusable components that internally load up instances of wwDotnetBridge. Examples of this in Web Connection and Client Tools are wwSmtp and wwJsonSerializer, both of which load up wwDotnetBridge internally where you can't control the .NET runtime version.

Internally these components use GetwwDotnetBridge() - a helper function in wwDotnetBridge.prg - which provides a cached instance of wwDotnetBridge and tries to load the highest version of .NET installed on the machine. The function scours the .NET install folder, finds the latest version of .NET installed and uses that to load the .NET Runtime. Using GetwwDotnetBridge() is a good idea for your own applications, as it is a simple way to minimize wwDotnetBridge load time and share a single instance of wwDotnetBridge.

In most cases GetwwDotnetBridge() does the right thing by finding the highest version installed and using it. For this reason we recommend that you use this function - especially if you expose wwDotnetBridge in other components where the calling application may not be able to explicitly set the .NET version. By using GetwwDotnetBridge everything uses the same logic to find the same runtime version which should ensure there's no confusion over which version is used.
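In practice using the helper is a one-liner - a minimal sketch, combining GetwwDotnetBridge() with the GetDotnetVersion() method shown earlier to verify which runtime actually loaded:

do wwDotnetBridge  && Load library
loBridge = GetwwDotnetBridge()   && cached instance, highest installed .NET version
? loBridge.GetDotnetVersion()    && confirm which runtime was actually loaded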

But using this helper does not by itself guarantee that you get the right runtime - it only guarantees you get the latest version. It won't prevent problems if, somewhere in your application, another component or your own code explicitly creates wwDotnetBridge with a different .NET Runtime version before your call to load the runtime.

Making sure you load the right .NET Runtime

Ok - so getting the right version can potentially suck, especially when you might have multiple components that also load wwdotnetbridge!

However, there's a simple trick you can use to make sure your application always gets the right version of the runtime, under your control and not some random component's:

In your application's startup code, force wwDotnetBridge to load. Do it as part of the application's initialization code and simply run:

loBridge = CreateObject("wwDotNetBridge","V4")

to force the .NET 4 runtime, or:

loBridge = CreateObject("wwDotNetBridge","V2")

to force use of the .NET 2 runtime.

You should specify the highest runtime that any of your application's .NET Interop requires. So if you'll have any .NET 4 components in your app, make sure you specify "V4". Otherwise specify "V2".

This works because the .NET Runtime loads only once - any other component that explicitly requests to load with V2 after you've specified V4 still gets the V4 .NET Runtime.

Summary

wwDotnetBridge can only load a single version of the .NET Runtime, and the first load wins - all subsequent loads will use the same runtime. To make sure you don't end up with too low a version because one component explicitly loads a lower version first, take proactive steps and explicitly create an instance of wwDotnetBridge right at application startup with the highest version of .NET that you expect to use. Since the .NET Runtime is backward compatible, components built for an older version will still run, while your latest and greatest components can take advantage of the newest .NET version.

Get to it!

Web Connection Tip: Virtual Path Resolving with ~/


Getting paths to resources and media content right in Web applications is a crucial part of Web development. In general you never ever want to hard code a path to an internal site link or resource within your application. Whenever possible any page code that creates HREF links or links to CSS, Images or scripts should either be relative (ie. ../image.png) or relative to the root site.

Relative linking is native to HTML and CSS and if possible you should use it. Relative paths are relative to the currently active URL and so allow you to always keep relative locations in sync. If the entire application moves to a new folder the links still work because they're relative to each other.

Relative paths can break down when you move a page to a different location within a site. If you have a page that has relative links to an image in a subfolder (like images/refresh.png) and you then move this page up one level in the folder hierarchy it no longer finds the image in an images subfolder so either the link needs to be adjusted or your link breaks.

Relative paths are good, but you can't always use them - specifically from generic code that doesn't live on a page per se or that code that needs to run from different pages. Generally this happens in custom components or helper functions. Examples of these inside of Web Connection itself are Web Control Framework pages or helper functions like the wwHtmlHelper functions.

Enter Virtual Path Syntax ~/

Web Connection - much like ASP.NET - supports virtual path syntax which is as follows:

  • ~/page.wwd
  • ~/images/home.png
  • ~/admin/monitor/monitor.htm

This syntax provides virtual-directory-relative links that are, in effect, application specific. Given a virtual directory of /wconnect, the above paths evaluate to:

  • /wconnect/page.wwd
  • /wconnect/images/home.png
  • /wconnect/admin/monitor/monitor.htm

Where does it work?

You can use ~/ paths anywhere in Web Connection output when using the wwPageResponse class (which is the default for Web Connection 5.0 and later). When using this class, all output that includes ~/ based paths is automatically fixed up. So regardless of whether you're using Response.Write(), ExpandTemplate(), ExpandScript() or a Web Control Framework page, links are fixed up.

The following automatically convert to a full virtual path:

  • <a href="~/admin/admin.aspx">Admin Page</a>
  • <img src="~/css/images/home.png" />
  • <a href="~/">Home</a>
  • <div style="background-image: url(~/images/closebox.png)"></div>

So the first link would render as /wconnect/admin/admin.aspx etc.

How does it work?

Web Connection does this as part of the Response class rendering process, where it checks for text/html content, and if so searches for ~/ strings to replace. Specifically it looks for ="~/ and url(~/ in the output and replaces them. Internally Web Connection uses Process.cUrlBasePath, which provides the base virtual URL for a virtual directory or root site. The value is filled from the YourApp.ini file and the VirtualPath setting for your Process class:

[Wwdemo]
Datapath=C:\WWAPPS\WC3\wwDemo\
Htmlpagepath=c:\westwind\wconnect\
Virtualpath=/wconnect/

This value is read on application startup and then assigned to the Process.cUrlBasePath and is available anywhere. When wwPageResponse renders it does the replacement like so:

FUNCTION Render()
LOCAL lcOutput, lcBasePath

*** Fix up ~/ paths with UrlBasePath
IF THIS.contentType = "text/html" AND VARTYPE(Process) = "O"
   lcBasePath = Process.cUrlBasePath
   IF !EMPTY(lcBasePath)
      this.cOutput = STRTRAN(this.cOutput,[="~/],[="] + lcBasePath)
      this.cOutput = STRTRAN(this.cOutput,[url(~/],[url(] + lcBasePath)
   ENDIF
ENDIF
…

Resolving Virtual Path Syntax Programmatically

In your own code, if you need to resolve a virtual path you can use the same virtual path syntax with the Process.ResolveUrl() method. As long as you are running with a Web Connection Process instance, the top level PRIVATE Process object is in scope and you can call ResolveUrl() on it to resolve virtual relative paths.

So from within your FoxPro code you can easily do:

lcjQuery = Process.ResolveUrl("~/scripts/jquery.min.js")

to get /wconnect/scripts/jquery.min.js.
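This is handy in reusable helper code. As a sketch - the ScriptLink function below is hypothetical and assumes a running Process instance is in scope:

*** Hypothetical helper - builds a script tag that works from any page
FUNCTION ScriptLink(lcRelPath)
RETURN [<script src="] + Process.ResolveUrl(lcRelPath) + ["></script>]
ENDFUNC

Called as ScriptLink("~/scripts/jquery.min.js") in the /wconnect virtual, this produces a /wconnect/scripts/jquery.min.js reference regardless of which page invokes it.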

Paths are important!

This seems like a very simple thing and you might even write this off as, "ah I don't need this, I just use relative paths". But I'm often surprised how often I see code that doesn't take this into account and ends up hardcoding virtual paths like /wconnect/css/images/home.png. You should NEVER write code like that, either in HTML or in FoxPro code-behind. If in the future your app moves to a new virtual directory or - more commonly - a root site, links like that will simply no longer work and will break your code.

Remember to always use relative paths - either page relative or virtual relative - to ensure that your site remains portable. It's a lesson you'll remember if you've ever had to go in and fix up links after the fact.

Publishing your Web Connection Web site from Visual Studio


Visual Studio 2012 (Update 2) and later includes the ability to publish Web Site projects to the Web server. Using either IIS WebDeploy or FTP, you can basically publish an entire Web site - all the HTML templates, HTML, CSS, Scripts and configuration files and even your executables if they exist as part of the live application.

The Web Publish feature is designed for ASP.NET or plain HTML projects, and is drop dead easy for those deployments, but a few small adjustments have to be made to ensure smooth publishing for Web Connection projects. With a little forethought you can even make Web Deploy publish both HTML and FoxPro executable content.

IIS Web Deploy

IIS Web Deploy is a Microsoft IIS related transfer protocol that allows you to publish files from a client machine to the Web server. This is a two-way service protocol that has some smarts associated with it: it knows what's been published on the server and transfers only the content that has changed. FTP deployment is also available, but it's generally less efficient and - currently at least - doesn't support individual file publishing.

If you're going to use this feature - Web Deploy is the preferred choice.

To run Web Deploy you have to install it on the IIS Web Server. The latest version is Web Deploy 3.5 which can be installed using the IIS Web Platform Installer.

WebPlatformInstallerWebDeploy

The latest version - unlike older version - is completely self-configuring and once installed is ready to go. Older versions required multiple steps and service startup, but the current version once installed just works.

If you can't install Web Deploy you can also use FTP to publish. The rest of this tutorial assumes Web Deploy, but most features work the same except configuration, and as of VS 2012 Update 2 individual file publishing was supported only with Web Deploy.

What can you publish?

Before digging in on how Web Publishing works, let's discuss how Web Deploy fits with Web Connection. Web Deploy is typically meant for ASP.NET based Web sites. Web Connection doesn't quite fit that profile, but you can still use Web Deploy to publish all the Web related files of a project. This means you can publish all your HTML templates (i.e. your script mapped templates) and all static files (HTML, CSS, JS, images etc.). For Web Site projects, Web Deploy basically copies ALL files in your folders to the server. Exceptions are the Web Deploy related files and user specific configuration files.

If you decide to hold your executables in a subfolder of the Web folder (like the Web Connection.deploy folder for example), you can also publish an executable this way. But you have to be careful - if the executable is running and locked, the publish will fail at that point and stop updating files. In order to copy executables you'll likely have to stop the application before updating.

Using Web Deploy with a Web Connection Project

By default Web Connection uses what is known as a Web Site project in Visual Studio. A Web Site project is a free file project, which means that there's no explicit project file; rather the project is simply a directory with files in it. This works well for Web Connection because although Web Connection simulates ASP.NET applications, it's not really an ASP.NET application.

To publish files you can simply go to the site's root node, right click, and select Publish Web Site:

WebPublishMenu

Here I'm on the TimeTrakker project and then selecting the Publish Web site option from the context menu.

Create Publish Profile
The first thing you need to do is create a Publish Profile where you specify where to copy files to on the server. To do this click the dropdown on the publish dialog and select <New Profile> (note: the pic below shows an existing profile of mine in the dropdown - for a new one it'll be blank).

NewPublishProfile

In the dialog that pops up give it a name and press enter. I'm going to use FoxMobile for my project, and I'm going to install it as a virtual directory underneath a TimeTrakkerFox Web site on my server.

Connection Properties
Next you're presented with connection properties. Here you specify the root Web Server URL and a site name:

WebDeployConnectionDialog

The Server is the root Web server name or IP address. If you have multiple sites any site will work - this entry is just used to connect to IIS and reach Web Deploy on the server. The Web Deploy client then hits this url: http://yoursite.com/MSDEPLOYAGENTSERVICE.

The Site name is the name of the IIS site *as entered in the IIS Service Manager* and this is where files are copied to. If the site does not exist yet, Web Deploy will create it in its default configuration. I don't recommend this - pre-create your site, virtual folder and the disk location for your files.

So if you look in IIS under the Sites node, you'll find the name of the site. If you're installing to the root of a Web site, just use the site name. If you're installing into a virtual folder below a root site as I'm doing here, provide the name of the virtual with a forward slash after the site name:

IISSiteName/VirtualName

If you just publish to the site root the syntax is simpler:

IISSiteName

Finally provide your username and password and check the checkbox to remember your credentials. Next validate the connection - this is important. Click the validate button to ensure that the connection actually works before moving on. If you have a problem you'll get a reasonably useful error message here such as site doesn't exist or invalid credentials etc.

If the connection validates go ahead click Next and/or Publish and Visual Studio will now copy all your project files to the server.

What gets copied

Because the Web Site projects that Web Connection uses by default are based on a simple file structure, Web Publish copies everything in all of your folders to the server. There are a few exceptions for Visual Studio work files, but otherwise everything gets copied.

web.config Transformations

One thing that is important when you publish to a live server is that your configuration settings for the local machine and the server might be different. If you're using the Web Connection module, configuration settings are stored in the web.config file and you typically have a few values that are different between the local test environment and the live site.

Web Deploy includes a feature called config transformations that lets you create different configurations and, based on the active configuration, apply transformations to the configuration file. This allows you to customize the handful of keys that might be different between local and remote installs.

AFAIK this is the only way to really handle configuration files, because at least with Web Site projects there's no easy way to exclude files from uploading. So the web.config file is always sent; config transformations allow you to customize the settings however.

To create a config transformation go to App_Data/PublishProfiles/YourProject.pubxml in your project:

ConfigTransform

This enables config transforms in the project and adds a new web.debug.config file to the project. This file is the transformation file. 'Debug' refers to the build configuration which by default is 'Debug' and which you can see in the top toolbar. You can create additional build configurations, so it's possible to associate multiple deploy targets in the publishing configuration. For example, you can publish to staging and live servers with different configuration.


The transformation file uses XSL syntax to allow you to replace sections and values in the config file. Here's the one I use for the FoxMobile app:

<?xml version="1.0" encoding="utf-8"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <system.web>
        <compilation xdt:Transform="RemoveAttributes(debug)" />
    </system.web>
    <webConnectionConfiguration>
        <add key="ExeFile" value="d:\Web Sites\fox\FoxMobile.exe"
             xdt:Transform="Replace" xdt:Locator="Match(key)" />
        <add key="ServerCount" value="1"
             xdt:Transform="Replace" xdt:Locator="Match(key)" />
        <add key="TempPath" value="d:\temp\wc\"
             xdt:Transform="Replace" xdt:Locator="Match(key)" />
    </webConnectionConfiguration>
</configuration>


A transformation file basically contains only the changes to the original. So web.config is the base file, and web.debug.config (where 'debug' is the name of the build configuration) contains all the overrides and changes specific to the live setup. This allows me to have two sets of configuration settings - one for the local site and one for the remote site - so that both sites can run.

With this in place I can now simply publish the entire site without having to worry about Web Deploy writing over my configuration file.

In the example above I only override a couple of the webConnectionConfiguration settings that are different between the local and remote setups. All other settings are left alone. It's also possible to apply the Replace transform to the entire webConnectionConfiguration section by using the Replace transform without a locator. You can find much more detail on what you can do with web.config transformations here:

http://msdn.microsoft.com/en-us/library/dd465326(VS.100).aspx
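As a point of reference, a section-level Replace without a locator might look like the sketch below in web.debug.config. The values shown are placeholders, not actual settings from this project:

```xml
<!-- Hypothetical example: replace the whole webConnectionConfiguration
     section instead of matching individual keys. Values are placeholders. -->
<webConnectionConfiguration xdt:Transform="Replace">
    <add key="ExeFile" value="d:\Web Sites\fox\FoxMobile.exe" />
    <add key="ServerCount" value="2" />
    <add key="TempPath" value="d:\temp\wc\" />
</webConnectionConfiguration>
```

The tradeoff of replacing the whole section is that every key must be repeated in the transform file, but it avoids surprises when keys are added to the base web.config later.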

Publishing Individual Files

You can also publish individual files rather than the whole site. To do this highlight one or more files in the Web project and then choose Publish Files from the context menu:

PublishSelectedFiles

This is a quick and easy way to send up individual files to the server. Note that this option only works once you've set up a publish profile as described earlier. You don't have to publish a full project first, but you have to at least go through the Profile setup and validate steps for this option to work.

Unless Publish Web Site is too inclusive for you, or you have a special need where you don't want to push everything up to the server just yet, there should be little need to publish individual files instead of publishing the project. Remember that Web Deploy is smart and only publishes stuff that has actually changed, so a site publish tends to send only a few files after the initial publish. The advantage of using a full deploy is that you don't have to keep track of what's changed - the publish feature does that for you.

What about Binaries?

So far I've only talked about the Web site content. But you can also deploy binary content in the same way.

In recent versions of Web Connection we've provided a webConnection.deploy folder which was meant for just this purpose. You can copy your executables to this folder and if you do those files get published as well. The webconnection.deploy folder should be set up just like you would set up your main application folder, but because it lives under the Web site it can be deployed all at once along with the rest of the application.

Here's what the deploy folder looks like for the Time Trakker project:

DeployFolder

The root folder contains the executable and configuration files. Also, wconnect.h is there (required for dynamic compilation of dynamic pages).

You also can optionally add a Data folder for the first deploy to push application data in DBF files up to the server. But you'll want to remove those files from the deployment folder after the first deploy so that you don't overwrite the data captured on the server. Also be very careful if you try to run the application out of this folder locally, as some files - wwSession, wwWebRequestLog and FoxUser - are auto-created, and you don't want to end up pushing these large files to the server each time :-).

Locked Server

If you update the server while it's running, the update will likely fail because the EXE server is actually running and the EXE file on disk can't be updated. So in order for subsequent deploys to work (whenever the EXE has changed) you'll have to stop the EXE. If you're running in COM mode, or you're running the Web Connection .NET Module with EXE based servers, you can temporarily stop servers using the administration page.

For standalone EXE servers make sure you have the following key set:


<webConnectionConfiguration>
    <add key="ExeFile" value="~\WebConnection.deploy\FoxMobile.exe"
         xdt:Transform="Replace" xdt:Locator="Match(key)" />
    <add key="ServerCount" value="1"
         xdt:Transform="Replace" xdt:Locator="Match(key)" />
</webConnectionConfiguration>

This points Web Connection at where the EXE file lives and allows it to remote start and stop the server, assuming your service account has rights to start and stop processes.

In file mode use:

UnloadFileServers

The process is:

  • Unload File Servers on Admin page
  • Run the Web Publish operation from Visual Studio
  • Load File Servers on Admin page

The Unload File Servers option unloads all of the running EXE servers by killing them. After unload run the publish operation. When that's done you can hit Load File Servers to start the servers back up.
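If you deploy often, the unload/publish/load cycle lends itself to scripting. The sketch below uses the wwHttp client to hit the admin links before and after publishing. The URLs, credentials and link names shown are placeholders, not actual Web Connection admin endpoints - check your own site's Administration page for the real maintenance links:

```foxpro
*** Sketch: automate the unload/load cycle around a publish.
*** URL and credentials are placeholders - substitute the actual
*** links from your site's Administration page.
DO wwHttp
loHttp = CREATEOBJECT("wwHttp")
loHttp.cUsername = "admin"         && placeholder admin account
loHttp.cPassword = "superseekrit"  && placeholder

*** Unload the file based servers before publishing
lcResult = loHttp.HttpGet("http://yoursite.com/admin/UnloadFileServers.wc")

*** ... run the Web Publish operation from Visual Studio ...

*** Reload the servers afterwards
lcResult = loHttp.HttpGet("http://yoursite.com/admin/LoadFileServers.wc")
```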

For COM operation you can click on the Hold Requests | Switch option to flip the server into ON HOLD mode:

ServersOnHold

The process is:

  • Click on Hold Requests Switch to show Servers ON HOLD
  • Run the Web Publish operation from Visual Studio
  • Click on Hold Requests Switch to show Servers running


Clearly, once you add executables to the mix things get a little more complicated with publishing due to FoxPro's specific needs of updating executable files. You may even have to resort to this same mechanism if you're updating only FXP/APP files as well as these files can also get locked at times.

Strategy

Web Publish makes it almost too easy to publish, so you'll want to decide when it's appropriate to push a full deploy to the server. Typically you can work locally until you've got everything right, then push to the server.

If you do decide to deploy your EXE to the server using Web Deploy, I'd recommend that you treat the webconnection.deploy folder as just that - a deployment folder, with the files in that folder only changing when you actually want to deploy a new EXE - rather than building your EXE directly into this folder. You don't want to be sending up a new EXE every time you recompile, even if there are no changes in the EXE, especially since Web Connection EXEs tend to be at least 1meg in size just for the base features plus your application code.


So develop your app in some other folder, debug and tweak as needed and then when the EXE is ready, copy it into the webconnection.deploy folder for publishing.

Summary

The Visual Studio Web Publish feature is very nice, especially in recent versions of Visual Studio that have made it possible to use Web Publish for Web Connection projects. It's an easy way to very quickly and effectively copy your Web project files - and potentially your executable files - to the Web server. Between quick publish operations for individual files and full project publishing, it's one of the cleanest ways to deploy your applications to a live Web site, and it beats manual FTP deploys.


Fixing wwDotnetBridge "Unable to load Clr Instance" Errors


In recent weeks a few people have run into the Cannot load CLR Instance error when trying to use the wwDotnetBridge class from Visual FoxPro. The cause for this problem is a security issue in Windows and some new security features in .NET 4.0 when running under User Account Control (UAC) in Windows.

The Problem

wwDotnetBridge is distributed as part of a simple .zip archive - you download a .zip file and copy the files to your local machine. Unfortunately Windows recognizes that .zip files from the Downloads folder contain 'downloaded' files and so adds some additional security flags onto files copied to the local machine. These security attributes cause a problem when trying to load the DLL.

If you now try the simplest thing possible:

DO wwDotnetBridge
loBridge = CREATEOBJECT("wwDotnetBridge","V4")
? loBridge.GetDotnetVersion()

you'll find that you get an error Unable to load wwDotnetBridge: Unable to load Clr Instance. 0x80131515.

unabletoload

in this code:

*** Fail creation if the object couldn't be created
IF ISNULL(this.Load())
   ERROR "Unable to load wwDotNetBridge: " + this.cErrorMsg
   RETURN .F.
ENDIF

However, if you try to load "V2" of the runtime you'll find that the DLL loads just fine.

Unblocking the DLL

Luckily the solution to this problem is generally quite simple. You simply need to unblock the DLL. To do so:

  • Open Explorer
  • Find wwDotnetBridge.dll
  • Right click and select Properties
  • Click the Unblock button

Here's what the dialog looks like:

UnblockwwDotnetBridge
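If you'd rather not click through Explorer dialogs, or you need to unblock files on several machines, PowerShell 3.0 and later ships with the Unblock-File cmdlet, which removes the same downloaded-file marker as the Unblock button:

```powershell
# Remove the 'downloaded from the Internet' marker from the DLL.
# Adjust the path to wherever wwDotnetBridge.dll lives on your machine.
Unblock-File -Path .\wwDotnetBridge.dll
```

This can also be applied to everything extracted from a downloaded .zip in one pass with `Get-ChildItem -Recurse | Unblock-File`.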

Distributing your App and avoiding Unblocking

It would really suck if these steps had to be done with a distributed application. But fortunately this problem pops up only if the DLL originated from an Internet download, either as the file directly or as part of a zip that just copied the file to the hard disk. This specific sequence of actions effectively marks the DLL as suspicious code, which is the cause for the error.

For your final deployed applications however, this shouldn't be an issue. If you install your application using a Windows installer or another process that runs as an administrative task (ie. permissions were elevated as they typically are for software installations), the DLL file gets properly installed and you will never see this issue pop up.

Unfortunately this is a bummer for development and people trying for the first time to use wwDotnetBridge because the component is shipped as part of a .Zip file that is likely downloaded. So a large chunk of developers are likely to see an issue with this.

Other load Problems

Note that while Windows blocking is likely the most common problem, there can be other causes. Loading from network shares is problematic, for example, unless you either provide custom policy settings or a configuration file setting. You can find out more in the help file on how to run wwDotnetBridge or any other .NET assemblies from a network location.
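For reference, the standard .NET 4.0 mechanism for allowing assemblies to load from a network location is the loadFromRemoteSources element in the host process's .config file. The sketch below assumes a config file named after your EXE (e.g. yourapp.exe.config - a made-up name); consult the help file for the settings that specifically apply to wwDotnetBridge:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <!-- .NET 4.0 setting: grant full trust to assemblies loaded
         from remote (UNC/network) locations -->
    <loadFromRemoteSources enabled="true" />
  </runtime>
</configuration>
```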

Remember me, Remember me, Remember me

I'm hoping that this blog post will make the issue at least a bit easier to search for, since the solution is typically relatively simple.

Here's a funny anecdote about this error: I had run into this problem before, and actually had spent the time to add some documentation in this regard into the help file. In fact, if you look at the wwDotnetBridge topic in the help file - in the remarks on the bottom -  you'll see this issue addressed along with a detailed link to more info. I however had forgotten about the whole issue, so last week I've been going back and forth with two developers who were running into this problem. Not only that but I couldn't duplicate it, because my copy of wwDotnetBridge.dll isn't copied from a downloaded .zip file and I typically run as an Administrator. Only when I looked up the error code and found out the security issue and reference to unblocking did I finally remember that I solved this problem before :-)

So, yes, writing it down again is probably a good idea.

wwFtp and WinInet Problems with IE 11


In the last few weeks I've gotten a few notes from various customers that wwFtp has started to break for them after upgrading to IE 11. It appears there's been a new bug introduced with IE 11 that causes failures on certain types of connections.

There's more info on this in this StackOverflow question:

http://stackoverflow.com/questions/19683291/wininet-from-ie-11-randomly-returns-error-12003-for-most-ftp-functions

Specifically it points at some reproducible problems.

So far all the problems I've heard reported are related to file uploads rather than downloads. And for uploads there are a few workarounds available.

Setting the Scenario

So I set up a simple test to upload some files to my own FTP server and sure enough I can see the problem occurring with this code:

CLEAR
DO wwftp
DO wwutils
LOCAL o as wwFtp
o=create("wwFTP")

? o.FTPConnect("www.west-wind.com","ricks",GetSystemPassword())
? o.FTPSendFileEx("c:\temp\geocrumbs.jpg","geocrumbs.jpg")
? o.FTPSendFileEx("c:\temp\iefullscreen.png","iefullscreen.png")
? o.FTPSendFileEx("c:\temp\iemetrononfullscreen.png","iemetrononfullscreen.png")
? o.cErrorMsg
o.FTPClose()
RETURN

Here I'm using FtpSendFileEx() to upload multiple files on a single connection, and when I run this the third FtpSendFileEx() call ends up failing with a 12003 error (extended error) and an error message basically echoing back the transfer command.

200 Type set to I
226 Transfer OK

Oddly this looks like the transfer worked even though the error message shows, but examining the target folder on the server reveals that the file was not actually sent, or at least not written there.

So what can we do?

Disable lPassiveFtp

wwFtp by default uses Passive FTP as it's more flexible for going through firewalls and proxied HTTP connections. By default lPassiveFtp is .T. and so passive FTP is used. Passive FTP basically creates a connection, does its transfer and drops the connection, automatically re-establishing as needed. In the process the FTP connection might actually jump ports.

By setting:

o.lPassiveFtp = .F.

wwFtp uses Active connections, which stay alive for the duration of the connection and stay fixed on a single port.

Using Active Connections I had no problems with uploading many files.

Unfortunately, active connections do not always work and can be flaky with connection drop offs, but it entirely depends on the environment. In general I recommend using Passive as the default, but in light of the current bug, Active at least should be tried (and you should always expose that setting as an option in your application settings so you can easily switch modes).

Connect and disconnect for each Transfer

Another option for sending files is to simply not reuse connections and send files by opening, sending and closing the connection. Doing this is reliable but it'll add a little overhead especially if you're sending lots of files.

So this works too:

LOCAL o as wwFtp
o=create("wwFTP")
o.lPassiveFtp = .T.
? o.FTPConnect("www.west-wind.com","ricks",GetSystemPassword())
? o.FTPSendFileEx("c:\temp\geocrumbs.jpg","geocrumbs.jpg")
o.FTPClose()
? o.FTPConnect("www.west-wind.com","ricks",GetSystemPassword())
? o.FTPSendFileEx("c:\temp\iefullscreen.png","iefullscreen.png")
o.FTPClose()
? o.FTPConnect("www.west-wind.com","ricks",GetSystemPassword())
? o.FTPSendFileEx("c:\temp\iemetrononfullscreen.png","iemetrononfullscreen.png")
o.FTPClose()

Use FtpSendFileEx2

Some time ago I quietly added a wwFtp::FtpSendFileEx2() function to wwFtp. The original FtpSendFileEx() method is a very low level function that makes a bunch of internal API calls from within Visual FoxPro. It also adds some additional features, like giving you the ability to get notified of chunks of data being sent.

Even prior to this IE 11 issue, I've found that on occasion when sending large numbers of files, FtpSendFileEx() would occasionally stall for no apparent reason. It was rare but enough of a problem to consider alternatives. The alternative was to build another routine - FtpSendFileEx2(), which provides the same functionality but calls one of the higher level WinInet functions (FtpPutFile()) which is basically a single command. It turns out that using FtpPutFile() under the hood is a lot more reliable than the various API streaming functions.

FtpSendFileEx2() is parameter compatible with FtpSendFileEx(), but it doesn't support the OnFtpBufferUpdate() event calls to provide progress information as the file is sent, since it fires off a single API call into WinInet.

LOCAL o as wwFtp
o=create("wwFTP")
? o.FTPConnect("www.west-wind.com","ricks",GetSystemPassword())
? o.FTPSendFileEx2("c:\temp\geocrumbs.jpg","geocrumbs.jpg")
? o.FTPSendFileEx2("c:\temp\iefullscreen.png","iefullscreen.png")
? o.FTPSendFileEx2("c:\temp\iemetrononfullscreen.png","iemetrononfullscreen.png")

o.FTPClose()

and that works reliably too.

I use FtpSendFileEx2() in Help Builder to upload HTML help files to a Web site, and that can be 1000s of files in a session - and it works without problems, so this is what I would recommend you use for uploading files in bulk.

What to use?

If you're uploading a bunch of smallish files, use FtpSendFileEx2() - you won't get progress info within each file, but you can certainly report progress on a per file basis between uploads. If you upload just one or two larger files and you need the OnFtpUpdate() API to provide progress info, use FtpSendFileEx(), but just make sure you reconnect for each file upload.
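If you go the reconnect-per-file route, you can keep that rule in one place with a small wrapper. This is a sketch, not part of wwFtp - the function name is made up, and it assumes the wwFtp convention of returning 0 from FTPConnect() and FTPSendFileEx() on success:

```foxpro
*** Hypothetical helper: upload files with a fresh connection per file
*** to work around the WinInet/IE 11 upload bug.
FUNCTION FtpSendFilesReconnect(loFtp, laFiles, lcServer, lcUser, lcPassword)
LOCAL lnX
FOR lnX = 1 TO ALEN(laFiles, 1)
   IF loFtp.FTPConnect(lcServer, lcUser, lcPassword) # 0
      RETURN .F.   && couldn't connect
   ENDIF
   IF loFtp.FTPSendFileEx(laFiles[lnX], JUSTFNAME(laFiles[lnX])) # 0
      loFtp.FTPClose()
      RETURN .F.   && upload failed - details in loFtp.cErrorMsg
   ENDIF
   loFtp.FTPClose()
ENDFOR
RETURN .T.
ENDFUNC
```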

Switching active/passive mode is just a quick fix that might help get you out of a bind, but as a long term solution I'd still recommend you use Passive mode as it's much more reliable.

Hopefully this will be a temporary issue that Microsoft addresses soon - this is turning out to be a major headache for some of my customers who've been calling in frantically asking to see if there's a solution to this problem. It sucks when an application that's been running for 10 years mysteriously breaks after a silly browser update.

Showing a Wait Message when submitting longer Requests


Here’s a question that comes up quite frequently: I’m running a backend request that takes a few seconds to run – how do I show a simple ‘wait’ message while this request runs?

There are a number of ways to approach this problem, including some more complex async processing concepts that offload processing to a separate process to avoid tying up running Web Connection instances. But if your process is a one off process that rarely runs and doesn't take a huge amount of time (my cutoff is usually in the 10-20 second range before I'd consider async operations), there's a simple UI solution that might make waiting a little easier to deal with.

It’s the UI Stupid

The problem with long running submit requests is that when the user clicks a button to submit a form, they likely expect something to happen right away. If your app however, just sits around waiting for a response to come back, the user is very likely to think that something is wrong. The most likely response will be that they either wander off without waiting for completion, or in a typical UI will try to click the Submit button again. For the latter, where you had one small problem before, you now have multiple small problems that can very easily tie up your server because those requests are slow.

So a UI solution to this should address two things:

  1. Providing some sort of UI that lets the user know that something’s happening
  2. Prevents the user from clicking the submit button again

It turns out doing this is pretty simple. Let’s look at an example. The following is a simple form that has a submit button.

SubmitBasicForm

I’m using a Web Connection Web Control Framework button here, but the same rules apply if you’re using Web Connection templates or script and plain HTML controls – the WCF just makes the code more self-contained.

<form id="form1" runat="server">
    <ww:wwWebErrorDisplay runat="server" id="ErrorDisplay" />
    <div class="contentcontainer">
        <h2>Slow Submit Operation</h2>
        <p>The following button click will take a few seconds to run.
           If your code doesn't do anything, the application will appear
           to simply be unresponsive but the existing form is still active
           daring the user to click the submit button again... and again...
           and again.</p>
        <hr />
        <ww:wwWebButton runat="server" id="btnSubmit" Text="Submit to Server"
                        CssClass="submitbutton" Click="btnSubmit_Click" />
    </div>
</form>

On the server side the FoxPro code in the button simply waits for a few seconds and dumps a message to the ErrorDisplay control on the page.

FUNCTION btnSubmit_Click()
   WAIT WINDOW "Hold on - processing..." TIMEOUT 10
   this.ErrorDisplay.ShowMessage("Waited for 10 seconds.")
ENDFUNC

This forces the server to essentially ‘hang’ for 10 seconds to simulate some long running task.

If you run this now by hitting the Submit button, you'll see that the UI doesn't change after the submit button is hit. The mouse pointer changes to a spinning pointer, but otherwise the UI stays the same. The Submit button is still active, so you can actually press it again, and if you do you generate another hit against the server.

Ok, so this UI works for quick requests that process in under 1 second, but for anything over a few seconds this becomes problematic.

Using JavaScript to show Status

The HTML DOM has a form submission event that fires when an HTML form is submitted. When you click the Submit button the form.submit() event is fired and we can capture this event to modify the browser UI by showing a progress dialog.

I'm going to be using the Web Connection client modal dialog control that makes this super easy. Modal dialogs show a black overlay on the page and display the content you apply them to on top of this black overlay. The overlay basically prevents access to the underlying document - so you can't click the submit button again - and provides nice visual feedback that the content is not accessible. You can then overlay some content of your choice.

Here I use a rotating image gif and some text. Here’s what it looks like:

ModalOverlay

The markup for this simple HTML looks like this and I simply add this to the bottom of the HTML page.

<div id="WaitDialog" class="hidden" style="text-align: center">
    <img src="css/images/loading_black.gif" />
    <div style="margin-top: 10px; color: white"><b>Please wait</b></div>
</div>

Note this uses the westwind.css stylesheet for the hidden CSS class and the loading_black.gif from the CSS folder.

Now to hook this all up we can use just a few lines of JavaScript:

<script src="scripts/jquery.min.js"></script>
<script src="scripts/ww.jquery.min.js"></script>
<script>
    $("#form1").submit(function() {
        $("#WaitDialog").modalDialog();
    });
</script>

I'm loading jQuery and ww.jquery.js from the scripts folder. jQuery provides the base functionality for document parsing, and ww.jquery.js provides the modal dialog feature (and a lot more modular functionality built on top of jQuery).

The .modalDialog() jQuery plug-in is attached to the element you want to display centered on the screen on top of the darkened background. Here I’m using the default modal dialog behavior without any options parameters, but there are a host of options available to customize the modal popup and you can see some more examples on the Web Connection sample site.

Summary

Using a modal overlay like this is a simple solution for longish, but not overly long, server side requests where you want to prevent users from just walking off or double clicking buttons. Capturing the form submission also enables even simpler approaches - for example, you could simply disable the submit button, or change the submit button text to something like "Processing…" and disable it, rather than displaying the overlay. That's a good idea for shorter requests, where it might be jarring to have a modal dialog pop up and then disappear immediately.
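The lighter-weight button-disabling variant mentioned above hooks the same form submit event. A minimal sketch, assuming the same form and button ids as the earlier example:

```html
<script>
    $("#form1").submit(function () {
        // instead of showing the modal overlay, disable the button
        // and change its caption so the user knows work is in progress
        $("#btnSubmit").prop("disabled", true)
                       .val("Processing...");
    });
</script>
```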

UTC Time in FoxPro


Representing dates and times across timezones can be a challenge, especially if you don't lay out a plan up front on how to store dates consistently. The sneaky thing with date and time management in larger applications - especially applications that live on the Web or are shared across locations - is that problems don't usually show up until much later in the lifetime of the application. For FoxPro in particular it's not natural to store dates in anything but local machine format, as the language doesn't directly support UTC formats, so it's very common to see FoxPro applications use local dates, which is usually a bad idea.

Here’s why and how we can address these issues…

TimeZones and Offsets

Depending on where you are in the world your local time is defined by an offset from UTC (Coordinated Universal Time) time or the baseline zero time. As you know if you’ve ever talked to somebody half way across the world at a certain time of day, while you just got done with breakfast, they are getting ready to go to bed on the other side of the world. This is the timezone offset. If you build applications that deal with customers that enter data into a system from multiple locations then using local times becomes problematic.

The problem of local times is made worse by Daylight Savings Time. Most of the world - especially regions further away from the equator - has daylight savings time, which is applied on different dates in different locations around the world. Usually it's "spring forward" and "fall back", with time getting set one hour forward for the summer. Some countries have it, others don't, and surprisingly, some countries actually have half-hour DST offsets. You can see where this is going - dealing with dates from multiple locations around the world can get complicated fast.

UTC Dates

A generally accepted solution to this problem is to store date values using a single time format that is adjusted from local time. Typically this date format is UTC time – or zero offset time.

I often get questions about why you should - and you REALLY, REALLY should - store dates in UTC format. The simple reason is: things change! You never know how the data that you are capturing today will be used in the future. Maybe today you're using the data in one location, but maybe in the future you have a Web application and multiple locations that access the data. Or you end up building a service for other people to consume your data. If your data is in local time, the data will be much less useful than data stored in a universal format.

While it's possible to convert data later on, it's a lot more painful after the fact.

In fact, on a project I worked on, the client insisted on going with local dates over my vehement protests. The argument almost always is that 'hey, we just have one location - everybody's running in this domain and we want to see data from the database in local time.' Convenient - yes absolutely. A good idea: NEVER! In every application where this has come up for consideration it's eventually caused a problem. You may not see this right away, because choosing to stick with local time is usually based on the assumption that you're staying with inputs from a single timezone. Over time, as applications age, things change. Data is accessed in other ways, possibly from different applications, or shared with other customers around the world. And all of a sudden you have a problem that you never thought would happen.

It's a generally accepted good software practice to store date and time values in a consistent format, and the easiest way to do this is to use UTC dates. The idea is simple: all data that is persisted to a permanent store is turned into UTC dates and written out that way. Any data retrieved is converted into a local date explicitly - only for display purposes. For things like queries, locale specific input dates are converted into UTC dates first, before the query is applied. IOW, if you use a common date format there will be conversion, but typically only when accessing or querying the data from the UI.
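Using the GetUtcTime() and FromUtcTime() helpers described later in this post, the UTC-at-the-boundary pattern might look like this sketch. The Orders table and its fields are made-up names for illustration:

```foxpro
*** Sketch of the UTC-at-the-boundary pattern.
*** 'Orders' and its 'Entered' field are made-up names.
SET PROCEDURE TO wwAPI ADDITIVE

*** Saving: convert the local time from the UI to UTC before storing
ltLocal = DATETIME()
REPLACE Orders.Entered WITH GetUtcTime(ltLocal)

*** Displaying: convert back to local time only at the UI
ltDisplay = FromUtcTime(Orders.Entered)
? ltDisplay

*** Querying: convert the user's local range boundary to UTC first
ltFrom = GetUtcTime(DATETIME(2014, 1, 1, 0, 0, 0))
SELECT * FROM Orders WHERE Entered >= ltFrom INTO CURSOR TQuery
```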

Most software systems provide easy support for date conversions. In .NET for example, there's a DateTime.UtcNow value you can use to get the current UTC time, along with ToUniversalTime() and ToLocalTime() methods to convert between local and UTC dates.

FoxPro and Date and Time

FoxPro doesn't make this easy because it can only represent dates in local time - that is, the time that is current for the computer the application is running on. However, it's quite common in other environments such as .NET and Java to always write out date time values as UTC time. UTC time is zero time, Greenwich (England) time, Zulu time - whatever you want to call it, it's the time that doesn't have an offset.

At the very least, if you need to interact with systems that use UTC time you’ll need to make FoxPro play nice in this space. But I would urge you to consider ALWAYS using UTC time in your applications. While it is definitely a little more work to deal with the UTC conversions, it’s not that much effort as long as you realize that the only time you care about this is when you convert dates to and from the user interface. Internally, all date operations can then work directly against the UTC dates.

Some UTC functions for FoxPro

If you are using any of our West Wind products – West Wind Web Connection, West Wind Client Tools or Internet Protocols – you already have this functionality I’m going to describe below. It’s built in with two functions (contained in wwAPI.prg):

GetUtcTime(ltTime)
Gets the current UTC time, or converts a FoxPro local DateTime to a UTC time.

FromUtcTime(ltTime)
Converts a date in UTC time format to local time.

Additionally there’s also:

GetTimeZone()

Returns the current timezone offset from UTC for the local machine. This is useful if you DIDN’T use UTC dates and are later forced to adjust dates based on local time and calculating time offsets based on user options or external locale access (ie. over a service). Essentially what this allows you to do is calculate relative offsets between two timezones and calculate a time for a different timezone. This function is also used by GetUtcTime() and FromUtcTime().

 

If you’re wondering about the inconsistent naming – the original function that existed in the framework for years was GetUtcTime(), which simply returned the current UTC time. At a later point I added the ability to convert an arbitrary DateTime value to a UTC date, and the function name stayed.

So using the two Utc conversion functions you can do the following:

? "Timezone: " + TRANSFORM(GetTimeZone()) +  " minutes"
ltTime = DATETIME()
? "Current Time: ", ltTime
ltUtc = GetUtcTime(ltTime)
? "UTC Time: ", ltUtc
ltTime = FromUtcTime(ltUtc)
? "Back to local: ", ltTime

I’m currently in the PDT (Pacific Daylight Time) zone and I get:

  • 06/09/2014 07:45:36 PM  - current
  • 06/10/2014 02:45:36 AM  - UTC
  • 06/09/2014 07:45:36 PM  - back to current
  • 420   -  timezone offset in minutes (-7 hours)

Note that GetTimeZone() will change if you change your system timezone, but VFP doesn’t see the change until you restart. The GetTimeZone() value also seems backwards: It’s +420 for Portland Oregon (PDT)  and –600 for Sydney Australia, but that’s how the Windows API is actually returning it. Essentially you can add the GetTimeZone() value to a local date to get a UTC date.
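To make that arithmetic concrete, here's a minimal Python sketch (the function names are just illustrative stand-ins for GetUtcTime()/FromUtcTime()) applying the +420 minute PDT offset from the example above:

```python
from datetime import datetime, timedelta

def get_utc_time(local_dt, tz_offset_minutes):
    # Mirrors GetUtcTime(): add the GetTimeZone() offset to get UTC
    return local_dt + timedelta(minutes=tz_offset_minutes)

def from_utc_time(utc_dt, tz_offset_minutes):
    # Mirrors FromUtcTime(): subtract the offset to get back to local
    return utc_dt - timedelta(minutes=tz_offset_minutes)

local = datetime(2014, 6, 9, 19, 45, 36)   # the PDT sample time above
utc = get_utc_time(local, 420)             # PDT offset is +420 minutes
print(utc)                                 # 2014-06-10 02:45:36
print(from_utc_time(utc, 420) == local)    # True
```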

Implementation

This is all nice and neat if you have West Wind tools, but what about the rest of you that don’t? Ok, here’s some code that provides this same functionality (or pretty close to it actually):

*** Code exists also in wwAPI of any West Wind Tools!
*** SET PROCEDURE TO wwAPI ADDITIVE

#DEFINE Testing .T.

SET PROCEDURE TO TimeZone ADDITIVE

#IF Testing
? "Timezone: " + TRANSFORM(GetTimeZone()) +  " minutes"
ltTime = DATETIME()
? "Current Time: ", ltTime
ltUtc = GetUtcTime(ltTime)
? "UTC Time: ", ltUtc
ltTime = FromUtcTime(ltUtc)
? "Back to local: ", ltTime
#ENDIF

************************************************************************
*  GetUtcTime
****************************************
***  Function: Returns UTC time from local time
************************************************************************
FUNCTION GetUtcTime(ltTime)

IF EMPTY(ltTime)
   ltTime = DATETIME()
ENDIF

*** Adjust the timezone offset
RETURN ltTime + (GetTimeZone() * 60)
ENDFUNC
*   GetUtcTime

************************************************************************
*  FromUtcTime
****************************************
***  Function: Returns local time from UTC time
************************************************************************
FUNCTION FromUtcTime(ltTime)
RETURN ltTime - (GetTimeZone() * 60)
ENDFUNC
*   FromUtcTime

************************************************************************
FUNCTION GetTimeZone
****************************************
***  Function: Returns the TimeZone offset from GMT including
***            daylight savings. Result is returned in minutes.
************************************************************************
PUBLIC __TimeZone

*** Cache the timezone so this is fast
IF VARTYPE(__TimeZone) = "N"
   RETURN __TimeZone
ENDIF

DECLARE INTEGER GetTimeZoneInformation IN Win32API ;
   STRING @ TimeZoneStruct

lcTZ = SPACE(256)
lnDaylightSavings = GetTimeZoneInformation(@lcTZ)
lnOffset = CharToBin(SUBSTR(lcTZ,1,4),.T.)

*** Subtract an hour if daylight savings is active
IF lnDaylightSavings = 2
   lnOffset = lnOffset - 60
ENDIF

__TimeZone = lnOffset
RETURN lnOffset
ENDFUNC
*   GetTimeZone

************************************************************************
FUNCTION CharToBin(lcBinString,llSigned)
****************************************
***  Function: Binary numeric conversion routine.
***            Converts a DWORD or unsigned integer string
***            to a Fox numeric integer value.
***      Pass: lcBinString -  String that contains the binary data
***            llSigned    -  if .T. uses signed conversion
***                           otherwise value is unsigned (DWORD)
***    Return: Fox number
************************************************************************
LOCAL m.i, lnWord

lnWord = 0
FOR m.i = 1 TO LEN(lcBinString)
   lnWord = lnWord + (ASC(SUBSTR(lcBinString, m.i, 1)) * (2 ^ (8 * (m.i - 1))))
ENDFOR

IF llSigned AND lnWord > 0x80000000
   lnWord = lnWord - 1 - 0xFFFFFFFF
ENDIF

RETURN lnWord
ENDFUNC
*   CharToBin

The code is pretty self-explanatory. GetTimeZone() makes a call to a Windows API function to retrieve the Timezone structure and then needs to do some binary conversion to peel out the timezone offset. The timezone value is cached so only the first call actually makes the API call for efficiency.
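The binary conversion CharToBin() performs – little-endian bytes to a signed integer – can be sketched in Python like this (the sample bias values match the PDT and Sydney offsets mentioned above):

```python
import struct

def char_to_bin(raw, signed=False):
    # Little-endian byte string to integer, mirroring the FoxPro CharToBin()
    value = int.from_bytes(raw, "little")
    if signed and value > 0x80000000:
        value = value - 1 - 0xFFFFFFFF  # same two's-complement fix-up
    return value

print(char_to_bin(struct.pack("<i", 420), True))   # 420  (PDT bias)
print(char_to_bin(struct.pack("<i", -600), True))  # -600 (Sydney bias)
```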

Again, if you are already using any West Wind tools you won’t want to use this code as it’s already included, but if you don’t, these functions are feature compatible with the West Wind versions.

Working with UTC Dates

Using UTC dates in your application is pretty straightforward. Your user interface captures dates and times as local datetime values as it always has, but when you actually write the data to the database you convert the dates to UTC first.

Note that you do have to be somewhat careful to ensure dates are always normalized. FoxPro has no concept of a date kind – strongly typed languages like .NET and Java actually treat dates as structures that contain additional information identifying the date type. So if you try to convert a date that is already of UTC kind to UTC, it won’t convert again and hose the value. Some languages are even better about this: JavaScript stores all dates as UTC dates, and only the string functions that convert or print dates actually convert the date to local time (by default). Other overloads allow getting the raw UTC dates out. This is actually an ideal case – you get safe date values, and the language itself drives the common use case that dates are used as local dates at the application UI level.

Unfortunately FoxPro has no safeguards for this situation, as dates are always local dates with no built-in way to convert. So it’s up to you to make sure you know which format a date is stored in.
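As a side note, this is exactly what timezone-aware date types buy you elsewhere. A quick Python sketch of the safeguard FoxPro lacks:

```python
from datetime import datetime, timezone

# An aware datetime carries its "kind" with it, so re-converting an
# already-UTC value to UTC is a no-op rather than a double conversion.
utc = datetime(2014, 6, 10, 2, 45, 36, tzinfo=timezone.utc)
print(utc.astimezone(timezone.utc) == utc)   # True - no double conversion
```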

When running queries against the data on disk with dates input by users or from other sources that are in local date format, you first convert the input dates to UTC dates, then run your queries with the adjusted date values:

*** some dates that come from the UI
ltUserDate1 = DATETIME()
ltUserDate2 = DATETIME() - 3600 * 24 * 30

ltTo = GetUtcTime(ltUserDate1)
ltFrom = GetUtcTime(ltUserDate2)

SELECT * FROM orders ;
  WHERE OrderDate >= ?ltFrom AND OrderDate <= ?ltTo ;
  INTO CURSOR Orders

Likewise, if you write data to disk that was captured from user input you have to capture the local date and convert it, and if you’re displaying values you have to convert them to local dates. If you’re using business objects, you can do this as part of the business object’s save operation, which can automatically update dates as they are saved. Properties can have setters and getters that automatically convert dates to the right format.

Typically this will be a two step process – loading and saving.

For saving you might do:

*** Save Operation
loOrderBus = CREATEOBJECT("Order")
loOrderBus.New()
loOrder = loOrderBus.oData

IF EMPTY(loOrder.Entered)
    loOrder.Entered = GetUtcTime()
ELSE
    loOrder.Entered = GetUtcTime(loOrder.Entered)
ENDIF

and for reading you would do the opposite:

*** Load Operation
loOrderBus = CREATEOBJECT("Order")
loOrderBus.Load(lcOrderId)
loOrder = loOrderBus.oData

IF !EMPTY(loOrder.Entered)
    loOrder.Entered = FromUtcTime(loOrder.Entered)
ENDIF

If you’re using business objects like the above you can make this even more transparent by automatically doing these transformations right inside of the business object itself:

DEFINE CLASS busOrder AS wwBusiness

FUNCTION Load(lcId)
IF !base.Load(lcId)
   RETURN .F.
ENDIF
this.oData.Entered = FromUtcTime(this.oData.Entered)
RETURN .T.
ENDFUNC

FUNCTION Save()
this.oData.Entered = GetUtcTime(this.oData.Entered)
RETURN base.Save()
ENDFUNC

ENDDEFINE

which makes the assumption that your user code deals with local timezones while the data saved is UTC.
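Here's a hypothetical Python sketch of that same busOrder pattern – store UTC, expose local – using an assumed fixed offset for simplicity (a real implementation would query the machine's timezone; the class and field names are made up for illustration):

```python
from datetime import datetime, timedelta

TZ_OFFSET_MINUTES = 420  # assumed fixed offset (PDT) for this sketch

class OrderStore:
    """The store always holds UTC; callers only ever see local times."""
    def __init__(self):
        self._rows = {}

    def save(self, order_id, entered_local):
        # convert local -> UTC on the way in, like Save()
        self._rows[order_id] = entered_local + timedelta(minutes=TZ_OFFSET_MINUTES)

    def load(self, order_id):
        # convert UTC -> local on the way out, like Load()
        return self._rows[order_id] - timedelta(minutes=TZ_OFFSET_MINUTES)

store = OrderStore()
entered = datetime(2014, 6, 9, 19, 45, 36)
store.save("A100", entered)
print(store._rows["A100"])            # stored as 2014-06-10 02:45:36 (UTC)
print(store.load("A100") == entered)  # True - round-trips back to local
```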

Dated

Clearly all of this isn’t totally transparent; even in languages that support UTC more easily there’s some effort involved to make this work. The main reason is that the database – the storage mechanism – in most cases doesn’t differentiate between date kinds either. FoxPro data doesn’t, and neither does SQL Server. NoSQL solutions like MongoDB do, because they store JSON date values which are ALWAYS UTC dates – you don’t get a choice (which in my opinion is the right way).

It’s not a totally transparent process, but it’s a good idea to do this nevertheless especially if you’re building applications that run on the Web or in other places where the applications are accessed from multiple locations – which is most applications these days. It’s worth the effort for peace of mind in the future and a good skill to learn as this is the norm for other platforms that are more date aware than FoxPro.

Creating .DLL Script maps with the Web Connection .NET Managed Module


On IIS 7 and later, Web Connection’s preferred connector interface is the .NET connector, which internally uses an ASP.NET HttpHandler implementation to interface between IIS and the Web Connection server. Feature-wise the .NET connector matches the ISAPI one, plus some additional features, but the main reason it’s the preferred choice is that it’s much easier to set up with the default configuration of IIS, and, surprisingly, it actually offers better performance and stability than the ISAPI implementation, especially in COM mode.

The managed module also works with IISExpress out of the box, so in a development environment you can easily use a non-system level tiny IIS Express implementation that’s easy to run and configure vs. having to deal with the full IIS environment configuration (which can be a bit daunting).

Lots of wc.dll Apps still around

I do quite a bit of IIS configuration support these days – a lot of people are finally upgrading ancient servers to newer versions of Windows, typically Windows 2012. One issue that has kept some people from updating to the .NET managed module is that older Web Connection applications in particular tend to access the raw wc.dll directly as part of the URL. I’ve long recommended against this practice, because accessing the ISAPI DLL directly is a potential configuration issue: IIS makes it hard to access DLLs directly and in some cases disallows it altogether. Nevertheless there are a lot of old applications out there that still use direct wc.dll links. Direct DLL links also make it much more difficult to deal with paths, as you always have to reference a physical location for the DLL – script maps are much more flexible because they can be called from anywhere, so path dependencies just go away in many cases.

Using a *.dll Script Map with the .NET Managed Module

So what if you have a complex application and you can’t (or won’t) give up the wc.dll links in your application?

Today I was working with a customer through some configuration issues. We started discussing the application, and as I always do I recommend using the .NET module instead of ISAPI. We went over the improvements and why it’s great, and we both agreed that this would be great, but what about our .DLL extensions in the URL?

After some problems with the ISAPI configuration, I actually went ahead and set up the .NET handler configuration to ascertain that things were working, and sure enough with the .NET module things were just working. Since it worked with the module, in a flash of enlightenment we decided to try creating a script map for .DLL and pointing it at the Web Connection .NET handler.

And lo and behold: the .dll script mapping worked perfectly!

It’s absolutely possible to create a .DLL script map to a .NET managed handler, which makes it possible to run existing Web Connection applications that use wc.dll directly on the .NET managed module.

Here’s what the relevant web.config settings look like (or you can use the IIS Admin interface to create this as well):

<configuration>
  <system.webServer>
    <!-- use this on IIS 7.5 and later -->
    <httpErrors existingResponse="PassThrough" />

    <!-- IIS 7 Script Map Configuration -->
    <handlers>
      <add name=".wc_wconnect-module" path="*.wc" verb="*"
           type="Westwind.WebConnection.WebConnectionHandler,WebConnectionModule"
           preCondition="integratedMode" />
      <add name=".dll_wconnect-module" path="*.dll" verb="*"
           type="Westwind.WebConnection.WebConnectionHandler,WebConnectionModule"
           preCondition="integratedMode" />
    </handlers>
  </system.webServer>
</configuration>

When mapped to a .NET handler, a .DLL script map behaves just like any other script map. Using the mapping you can run requests like this:

http://localhost/wconnect/somepage.dll

Ok that looks weird and is probably not your typical use case, but it actually works. More interestingly though you can do this:

http://localhost/wconnect/wc.dll?_maintain~ShowStatus

When this page comes up you’ll see that it loaded the .NET Managed handler, even though we referenced wc.dll! For this to work – you’ll want to make sure you remove the physical wc.dll from disk and have the webconnectionmodule.dll in the bin folder of your site.

This means that if you have an existing application that used any wc.dll links directly you can now use them with the .NET handler. Additionally you can start to clean up your application to use scriptmaps instead of the DLL link in the first place.

Why you should use ScriptMaps instead wc.dll Links

As already mentioned, ISAPI configuration is getting more and more tricky as IIS versions progress. IIS today mainly relies on .NET to handle extensibility to other services and interfaces, and ISAPI is more of a deprecated protocol. It’s still there, but only as an optionally installed feature. Further, if you are accessing a DLL directly through ISAPI (not through a scriptmap) you are directly accessing a binary file, which is generally discouraged. In order for this to work you have to explicitly enable generic ISAPI extensions in the IIS configuration.

Scriptmaps are simply a nicer way to write URLs. Instead of the ugly ~ syntax of:

wc.dll?ProcessClass~Method~&Action=Edit

you can use scriptmap to method mapping:

Method.sm?Action=Edit

where sm is a scriptmap, and method is the method of the process class that sm is mapped to. The URLs are much cleaner and easier for users to parse and understand.

Finally script maps allow you to simplify relative paths. Because a script map is not referencing a physical file in a specific folder like wc.dll, you can reference a scriptmap from any location and have it behave the same way. This means if you create a page in a subdirectory and you want to access the scriptmapped page you can use the same path and it will work. IOW:

Method.sm?Action=Edit
Admin\Method.sm?Action=Edit
Account\Method.sm?Action=Edit

all are treated exactly the same. The only difference is that the script path passed as part of the server variables will point to a different location, so Request.GetPhysicalPath() will point at the site relative physical path on disk. Otherwise each of those three commands is identical.

Scriptmaps are the way to go!

Scriptmaps make life easier, and once again I urge you, if you’re not using them, to think about integrating them into your Web Connection applications. I suspect sometime in the not too distant future IIS will stop supporting direct .DLL links and will force all operation to occur against script maps. Plus, the easier usage for referencing dynamic links makes code much more portable across sites or virtual directories if your development and live environments aren’t 100% identical.
