Oct 14

Creating a Healthcare.gov Web Site that Works


Background on the Healthcare.gov Technology Problems

My blog post on October 1st, Healthcare.gov is a Technological Disaster, described my experience using the Affordable Care Act's Healthcare.gov website on its first day. Based on that experience, I could tell the Obamacare site was a technical mess independent of the number of users on the system. However, "too many users" was a convenient and politically palatable excuse that the contractors and administration offered for the website's failure. It took about a week before the press began to understand that wasn't the case. Because of my blog post, I ended up in a New York Times article, on the national broadcasts of CBS, CNN and Fox News, and on a few radio shows (links are in my prior blog post). I was also invited to Capitol Hill by House Energy and Commerce Committee staffers to help them better understand what was and wasn't working and why.

Solutions for Improving Healthcare.gov

I dislike complainers who don’t offer solutions, so I feel obligated to offer some constructive ideas for creating a functional Healthcare.gov website. I’m not being paid to work on the Healthcare.gov web site and don’t have the actual specifications of what needs to be created, but based on my experience building database solutions and understanding load balancing, I offer the following ideas to consider.

Understanding the Buying Process for Health Insurance

It's important to understand what the web site should do. The primary mistake the designers of the system made was assuming that people would visit the web site, step through the process, see their subsidy, review the options, and buy a policy, all in one sitting. That is NOT how the buying process works. It's not the way people use Amazon.com, a bank mortgage site, or other insurance pricing sites for life, auto, or homeowner policies. People want to know their options and prices before making a purchase decision, often want to discuss it with others, and take days to become comfortable making a decision. Especially when the deadline is months away. What's the rush?

The existing process acts as if a retail website asked for your credit card number before showing what you could buy and their prices. Almost all sites let you browse without creating a user name. Retailers want you to see what’s available as quickly and easily as possible. People often visit multiple times before buying. Only after making a purchase decision should personal information be collected to complete the transaction.

The web site needs to reflect this and support a more common buying process.

Conceptual Overview

Here’s an overview showing three distinct processes that flow into each other (or people buy a policy at their step and leave the system). A critical part is offering a comparison matrix at each level so consumers can quickly see the differences between the insurance policies.

Healthcare.gov Redesign

  1. The first one gives policy options and non-subsidized quotes. People can click to purchase the policy from the insurance company. If so, they leave Healthcare.gov and the government is no longer involved.
  2. The second provides a subsidy estimate and uses the same display as the first, but shows prices both with and without the subsidy. People can also click to buy the policy without a subsidy and leave the system, or they can officially apply for a subsidy.
  3. The third is the actual application for the subsidy and the only path which collects Personally Identifiable Information (PII). Higher security is necessary for this.

The first two do not require PII and would not require high security. That means a commercial cloud service such as Microsoft Azure could be used to host the site and adjust to high traffic loads. It would support people shopping and browsing multiple times before buying without the need to invest in hardware or bandwidth.

With this improved design, only a small portion of the site's traffic would reach the final subsidy application. That portion can be isolated behind higher security and sized for much lower user volumes, since people would only apply once. Asking lots of personal questions at this stage is acceptable because those people are serious about purchasing.

User Experience Goals

These are some objectives for creating a great user experience:

  • Quickly get the unsubsidized insurance rate quotes and policies (no login required)
  • Easily compare among insurance policies based on features and price
  • Easily select and subscribe with an insurance company without a subsidy
  • Quickly receive an estimate of a subsidy without having to provide personally identifiable, confidential information
  • Easily compare among insurance policies based on features and subsidized prices
  • Do not ask unnecessary questions, such as race, that don't impact plans, prices, or subsidies
  • Formally apply for the subsidy (login and personal information required)
  • Select a subsidized policy and pass the appropriate information so the insurance company can validate the subscriber’s information and receive the subsidy
  • Once policy options are offered, allow users to create a login to save their inputs, and get back into the system to recover their work-in-progress. This would be required with the formal subsidy application but not necessary for the other options.

Technical “Back Office” Goals

  • Performance: The system should move people through the process as quickly as possible.
  • Collecting Information: It should not ask for any information that’s not required for generating the policy options and prices.
  • Fewer Screens: Rather than having one screen per question, multiple questions should be asked in as few screens as possible. People know how to scroll. Extra screens should only be added if they depend on answers from previous screens.
  • Data Security: The first part of data security is to NOT collect sensitive information. Sensitive information should only be collected from people actually applying for the subsidy.
  • Data Integrity: All database changes need to be wrapped in transactions that commit on success and roll back on failure (see the sketch after this list). Situations where accounts are partially created, with a valid user name but no account details, should never occur.
  • No Other Connections During Data Entry: The system should not be connecting to other data sources while the user is entering data. Just collect the data.
  • Offline Processing: Once the user enters all their data for a subsidy quote, a separate system processes the applications and interfaces with the other systems to validate the data and calculate the subsidy. By separating this process from the user’s online experience, problems with connections to other systems do not impact the user.
  • Email Notification: Once a subsidy is calculated, an email is sent to the user inviting them to log into the system to see their options.
  • Notification to Insurers: Web pages and web services should give insurers real-time views of the status of applications selecting their policies.
  • Commercial Cloud Hosting: Using a commercial cloud platform would provide automatic scalability to meet fluctuating levels of users without having to make hardware purchases. By eliminating the need to collect and store sensitive user data for most of the website, commercial cloud hosting and its benefits are available without security concerns.
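
To illustrate the data integrity goal, here's a minimal sketch of transactional account creation, written in the Access/VBA idiom used elsewhere on this blog (the real site would use a different stack, but the pattern is the same). The table and field names are hypothetical.

' Create a user name and its account details together, or not at all.
Sub CreateAccount(strUserName As String, strEmail As String)
    Dim wrk As DAO.Workspace
    Dim db As DAO.Database
    Dim blnInTrans As Boolean

    Set wrk = DBEngine.Workspaces(0)
    Set db = CurrentDb
    On Error GoTo ErrHandler

    wrk.BeginTrans
    blnInTrans = True
    ' Both inserts succeed or both are rolled back.
    db.Execute "INSERT INTO tblUsers (UserName) VALUES ('" & strUserName & "')", dbFailOnError
    db.Execute "INSERT INTO tblAccountDetails (UserName, Email) VALUES ('" & _
        strUserName & "', '" & strEmail & "')", dbFailOnError
    wrk.CommitTrans
    Exit Sub

ErrHandler:
    If blnInTrans Then wrk.Rollback   ' no partial account is ever left behind
    MsgBox "Account could not be created: " & Err.Description
End Sub

With this pattern, a failure on the second insert leaves no orphaned user name, which avoids the partially created accounts described above.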

Oversight Goals

Management and interested parties should have system dashboards:

  • Real-time Displays: Monitor user progress with summary tables and graphs showing the status of people moving through different stages of the system.
  • Basic Business Intelligence: Summary and drill-down details by state, date, hour, etc.
  • System Transparency: Provide a public view of some data in a cached mode (updated daily or hourly, but not real-time).

Design Overview

Here is how the goals could be implemented for the Healthcare.gov web site:

  1. The initial form asks people to select their state. If the visitor is in a state that has its own system, send them to that state's site; otherwise, proceed to the next step in the federal system.
  2. Collect the information necessary to create the unsubsidized options. I was told there were five or so pieces of information necessary to generate the unsubsidized rates (gender, year of birth, family status, smoking status, etc.).
  3. Display the available plans with options to compare and filter them easily based on plan level (gold, silver, bronze, etc.), provider, price, etc. This should be similar to retail web sites like Best Buy or Staples that show different products and their features in a matrix comparison, with buttons to get more details and a button to select one to buy. One would expect users to visit this page multiple times over multiple days to learn about their options before making a purchase.
  4. An option to save the inputs. This is the first point where a simple account is created: a user name (email address) and password with a standard email confirmation that doesn't have a time limit, and no sensitive information like social security numbers, birth dates, or names. This would allow users to get back to the previous screen without re-entering their data.
  5. An option to get a subsidized price estimate. If the person chooses this option, they create the simple account described above; no highly sensitive information is collected, and the account exists simply to retrieve the user's entries. The user provides the information necessary to calculate the prices without the system having to look up data from government sources. The user can enter their own values for income and whatever other factors impact generating a subsidy estimate. Just like bank web sites let you enter basic information to get a mortgage or car loan rate before you apply, Healthcare.gov should do the same. This would allow the site to create quotes quickly without bogging down or waiting for other systems such as the IRS, Experian, etc., and it minimizes the impact of too many users. Once the estimated subsidies are calculated, a display similar to #3 above would show the options.
  6. Finally, applying for the subsidy. Once someone decides they want a particular policy, they can officially apply for a subsidy. This is the first time personal data needs to be entered. The system should collect the data as quickly as possible without having to validate the information while the user is entering it. Once all the data is collected, the user is informed via email when the subsidy calculation is ready.
  7. A separate background process calculates the subsidy requests and looks up the necessary data from the different sources. If any of those linked systems is unavailable, it’s no big deal since it doesn’t impact the user on the web site. The user is already gone and waiting for an email. Once the calculation is generated (or if it couldn’t be generated), the user is notified via email and they can view the results by logging back into their account.

For management, there should be dashboards with tables and graphs showing what’s happening. No more excuses of not knowing how many people are in each phase of the process, how many have received quotes or enrolled, etc. For transparency, some of this information should be publicly available updated at least daily.

Conclusions

I'm not sure whether the people designing and developing the site will find these suggestions helpful. There are obviously lots of details not included in my proposal, but I'm confident my basic design is a significant improvement over the original site. It would provide a better user experience, be much easier and faster to develop, be easier to test, and be more scalable and secure. Was it that tough to envision earlier?

Let’s remember, this website remains the automation of a paper form. It’s not as hard as providing healthcare.

Related Blog Posts:
Too Big to Fire: How Government Contractors on Healthcare.gov Maximize Profits
Healthcare.gov is a Technological Disaster
Media Coverage for Changing the National Discourse on Healthcare.gov
Testifying before the House Committee on Homeland Security about Healthcare.gov
Jul 09

Microsoft Access 2013 Features, Changes, Resources and Runtime Release

Microsoft Access 2013 Features

Microsoft has published a variety of resources to discuss the new features of Microsoft Access 2013.

Free Runtime Version Available

The Microsoft Access 2013 Runtime enables you to distribute Access 2013 applications to users who do not have the full version of Access 2013 installed on their computers. The Microsoft Access 2013 Runtime Download is available in 38 languages!

Note that due to the many deprecated features in Access 2013, we recommend developers stick with the Access 2010 runtime unless they're deploying to an environment that has already migrated to Office 2013.


FMS Products for Microsoft Access 2013

We are busily enhancing our products for Access 2013. Access 2013 versions are already shipping for Total Visual Agent, Total Access Emailer, Total Access Speller, and Total Access Statistics.

Jul 03

Inspection Software for the National Archives and Records Administration (NARA)

With the upcoming 4th of July celebrations, we at FMS are proud to have worked with the National Archives and Records Administration (NARA) over the past year to help them better maintain and preserve the important documents of our nation. You can read about what we did in our new case study, Inspection Software for the National Archives and Records Administration (NARA).

About the National Archives

The National Archives and Records Administration (NARA) is the record keeper for the United States. Of all the documents and materials created in the course of business by the United States federal government, only 1%-3% are important enough, for legal or historical reasons, to be kept by NARA forever.

National Archives Building in Washington DC

Background

To ensure the quality of work performed by their Facilities Management service providers, the National Archives and Records Administration performs both random and targeted inspections of completed work orders.

Problem

Inspection findings were documented on paper, which, ironically, wasn't efficient for the National Archives. Reports summarizing the service results were created manually, a process that was time consuming and prone to human error.

Solution

FMS was selected to create a professional, multiuser system to collect the inspection results electronically and generate a variety of management reports. Within two months, we deployed our solution, which offers data entry screens replicating a variety of existing forms along with many new management reports. An intuitive user interface made the system easy to use without extensive training. More importantly, we established a solid database foundation to improve NARA processes both today and into the future.

Operational Impact

  • Stores inspection results in a shared database
  • Increases efficiency and accuracy of the collection and reporting process
  • Gathers information and performs statistical analysis in ways that were previously not available
  • Eliminates the need to maintain paper files
Jul 01

Microsoft Access Inconsistent Compile Error for a Field Reference in a Form

Our Professional Solutions Group was recently asked to diagnose a Microsoft Access database experiencing recurring compile errors with code behind a form that looks like this:

If IsNull(Me.Comments) Then

where Comments is not a control on the form, but a field in the form’s RecordSource.

In general, this compiles and runs fine, but on seemingly random occasions while the program is running, it generates a compile error saying the field was not found. The field always existed in the form's RecordSource, so why was this happening?

Solutions

There are a few ways to avoid this problem:

  • Change all the Me. references to Me!, which is the proper way to reference a field (rather than a control) in VBA code when no control is bound to the field.
  • Create an invisible text box with its ControlSource set to that field, give the text box a different name (e.g. txtComments), and reference the text box in code.
  • Deploy the database so its compiled state cannot be changed (ACCDE or MDE).

We prefer the invisible text box approach so we can reference the control name via the "Me." syntax rather than "Me!". The "Me." syntax is verified when the code is compiled, so a typo in the control name is caught at compile time. This is preferable to a runtime error that gets triggered when the user encounters that line of code. A minimal sketch of the approach is below.
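
Here's a minimal sketch, assuming a form whose RecordSource contains a Comments field; the control name txtComments is our hypothetical choice:

' In Design View: add a text box named txtComments, set its
' ControlSource to Comments, and set its Visible property to No.

Private Sub Form_Current()
    ' Me.txtComments is checked when the code compiles, so a typo in
    ' the name is caught immediately rather than at runtime.
    If IsNull(Me.txtComments) Then
        MsgBox "No comments entered for this record."
    End If
End Sub

The alternative field reference, If IsNull(Me!Comments), also avoids the compile problem but isn't verified at compile time.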

But Why?

Though we knew how to fix this, we were curious to understand why the compilation wasn’t consistent across users. It also didn’t fail when a specific event occurred. It seemed almost random when the compile error arose. And the form triggering the error seemed perfectly fine with a reference to a field that exists in its RecordSource.

The Real Cause for the Compile Error

Through our own research and help from our Microsoft Access MVP colleagues, we discovered that the compile error was tied to programmatically changing the RecordSource of a form. The change is not necessarily on the form where the compile error is triggered.

Microsoft Access seems to reset its internal list of field references some time after the RecordSource is modified, which triggers the compile error. This explains why some users experienced it and others did not since it depended on whether the user opened a form that changed its RecordSource. It also explained why the error didn’t occur immediately after a RecordSource was modified.

Special thanks to Dirk Goldgar for pointing this out. Hope you never encounter this!

Additional Resources for Database Compile and Field Reference Issues

For additional tips on Microsoft Access application development, visit our:
Microsoft Access Developer and VBA Programming Help Center

Jun 14

New Technical Support Forum and Ticket Tracking for FMS

We are very pleased to announce our new technical support site (http://support.fmsinc.com) to provide forums and the ability to submit technical support inquiries.

Our new site lets you submit requests and respond to them via email with our support team. It also lets you check the status of your requests and their entire chain of communications. You can log in directly or use affiliated logins from Facebook, Google, and Twitter.

You can also read information and ask questions to the community on topics related to Microsoft Access, Visual Studio, LightSwitch, and SQL Server. We hope you’ll join us.

Additional support resources are available on our web site.

Jun 07

Comparison of Microsoft Access, LightSwitch and Visual Studio Platforms for Database Developers

Last month I spoke at the Portland Access Users Group Conference at Silver Falls State Park. I gave a presentation introducing Visual Studio LightSwitch and how it can be used for SQL Server applications deployed on a variety of platforms. As a follow-up, I've created a summary matrix and discussion highlighting the features and limitations of Microsoft Access, Visual Studio LightSwitch, and Visual Studio.


Microsoft Access started at the beginning of the Windows revolution 20+ years ago and became the most popular database of all time. More recently, additional technologies have become significant, so it behooves the Microsoft Access community to be aware of the trends and options.

Database Platform Matrix

Ultimately, it's about being able to create solutions that help you and/or your users accomplish their mission. Sometimes the user's platform is critical, sometimes it's the data source, and other times it's the permissions you have to deploy a solution. A variety of platforms and options are available, each with its own benefits and limitations. Meanwhile, Microsoft Access is also evolving, with the latest Access 2013 version offering new web-based solutions.

We've written a new paper, Comparison of Microsoft Access, LightSwitch and Visual Studio Platforms for Database Developers, that summarizes what we're seeing and experiencing.

Mar 13

Mistakenly Blaming Microsoft Access instead of the Developer

In this March 10th Washington Post article, Alexandria tailor weaves custom solution for taking orders, a local firm is described as having struggled with Microsoft Access and being forced to migrate to a new system due to problems with their Access database. In particular, their database couldn't provide multiuser support and lost data when more than one person used it.

Unfortunately, stories like this perpetuate the myth that Microsoft Access is limited, when the real limitation was the skills of the developer who tried to customize it. It's a shame the business owner and developer weren't aware that Access could address the multiuser issues they encountered, saving the time, money, and headaches of migrating to a new platform.

Microsoft Access is Multiuser Ready

The reality is that Microsoft Access is fully capable of providing multiuser support if it's designed properly. For basic database solutions with under 1 GB of data (maximum 2 GB), Access comfortably supports up to 200 simultaneous users with a properly designed solution.

As the number of users and the amount of data grow, Access makes it relatively easy to migrate the data storage from an Access database to SQL Server while maintaining the application layer (forms, reports, code, etc.) in Access. This also lets you share the SQL Server data with web sites and other platforms. Supporting two users in a tailor shop would be trivial with MS Access.

Split Database Architecture for Multiuser Solutions

People sometimes treat Access databases like Excel spreadsheets and want each user to open and close the same file. That’s not the way to support multiuser data sharing in Access. A split database architecture is needed to separate the application layer from the shared database. Each user gets their own copy of the front-end database application that links to the tables in the shared database.

Microsoft Access includes a built-in wizard to split the database and another wizard to link the front-end database to back-end tables. We wrote a paper about this years ago called, Microsoft Access Split Database Architecture to Support Multiuser Environments, Improve Performance, and Simplify Maintainability.
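
As a rough sketch of what the linking looks like in code, here's how a front-end can relink its tables to a shared back-end using DAO (the back-end path is a hypothetical placeholder):

' Relink every linked table in the front-end to the shared back-end file.
Sub RelinkTables()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Const BACKEND_PATH As String = "\\Server\Share\TailorData.accdb"

    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If Len(tdf.Connect) > 0 Then        ' only linked tables have a Connect string
            tdf.Connect = ";DATABASE=" & BACKEND_PATH
            tdf.RefreshLink                 ' re-establish the link to the back-end
        End If
    Next tdf
End Sub

Each user runs their own copy of the front-end, and all copies point at the same shared back-end.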

While a web application has its role and advantages, the article mentions their internet connection isn't reliable and that their business is negatively impacted when it fails. That's an unfortunate result of their new platform. There are ways to create hybrid solutions that provide on-premises support along with shared web solutions, so these issues need to be considered when creating business-critical solutions.

Using Microsoft Access Strategically

Small businesses often have very limited budgets and time to understand technological options. Completely eliminating Office and Access as viable solutions for incorrect reasons is wasteful. Microsoft Access addresses an important segment of database needs, and offers small businesses and information workers the ability to make modifications and extensions that other platforms do not allow so easily. Understanding where and how to use Microsoft Access effectively along with its limitations offers organizations of all sizes a competitive advantage. We’ve helped many small businesses, non-profits, and multi-national companies properly use this technology very effectively. Here’s our article on Microsoft Access within an Organization’s Database Strategy that discusses this in more detail.

Conclusion

There are lots of terrible applications created on every technology platform whether it’s Microsoft Access, Excel, Visual Studio, Java, Oracle, SAP, etc. In this case, the skills of the Access developer were clearly lacking. Getting that confused with the technology is misguided.

Additional Resources

For additional resources to build robust Microsoft Access solutions and understand what's possible, visit our Microsoft Access Developer and VBA Programming Help Center.

Feb 18

Attending the Microsoft MVP Summit in Bellevue/Redmond, WA

I’m attending the annual Microsoft MVP Global Summit this week in Bellevue and Redmond, WA. This is actually my first experience at this event as I was awarded the MVP title this past summer for my support of Microsoft Access.

Over the years, FMS has had several Microsoft Access MVPs, including Dan Haught, Steve Clark, and Jim Ferguson, who was one of the original MVPs when the program started 20 years ago. Book author Alison Balter and Portland Access User Group leader Jack Stockton join me as new MVPs this year. Last night we had a nice kickoff event with fellow Microsoft Access MVPs.

The MVP Summit brings together 1,500 professionals from around the world. The MVPs cover all the different Microsoft product groups, which makes for a wonderful mix of expertise and enthusiasm. Over the next few days, the different Microsoft product groups will be giving presentations to attendees under NDA. Sorry, I can't share the content, but I can say it impacts our future planning.

Yesterday, they had a showcase of a variety of technologies from MVPs in the US plus companies from Taiwan, Japan, Germany, China, India, etc. It’s great to see the global impact of Microsoft.

How do you become an MVP? The usual path is to be involved in public forums answering questions and becoming an expert in the field. You don’t need to own a business to be an MVP. Other ways to be selected are to increase your professional visibility through products, writing books, blogs, etc. The MVP program is designed to recognize individuals who influence the market and help the community maximize the value of Microsoft products. So whether it’s XBox, Bing Maps, Dynamics, Exchange, Office 365, Excel, Outlook, PowerPoint, SharePoint, Visual Studio, Windows Phone, Word, etc., if you have a passion, expertise, and a willingness to share, the MVP community could be part of your future. Good luck!

Feb 12

Portland Oregon Microsoft Access User Conference

Silver Falls State Park, Oregon
May 4-6, 2013

FMS President Luke Chung is one of the featured speakers at this annual Microsoft Access conference hosted by the Portland Access User Group. This will be his third year speaking at this wonderful event.

Enjoy an amazing, rustic getaway to a beautiful state park with fellow Microsoft Access enthusiasts. Book early so you can stay in one of the limited number of cabins available at the conference center. The conference fees are amazingly low and include meals.

Luke will participate in various talks on Microsoft Access development, running a business, and creating solutions using Visual Studio LightSwitch. He'll also be staying at the site during the entire conference, so you'll have plenty of opportunities to meet him formally and informally.

For more information, visit the conference site.

Feb 11

Migrating BlogEngine.NET Posts to WordPress with Translations Using Microsoft Access

Migrating Our Site from BlogEngine.NET

We could have started our new blog from scratch, but since our existing blog had existed for many years, we wanted to migrate it, with all its comments, from our BlogEngine.NET host to WordPress. That turned out to be a tricky process, but we managed to do it. To help others facing the same situation, here are the steps we followed so you don't have to make the same mistakes we did:

Prepare the Existing Blogs for the Migration

The first step is to make sure your existing BlogEngine.NET blog is working properly and ready for export. One of the tricky and time-consuming parts of this is the reference to graphic files. BlogEngine stores its embedded graphics in its own structure using syntax similar to this (our blog was in the BLOG folder):

src="/blog/image.axd?picture=banner.jpg"

Note that this only impacts graphics that were uploaded into BlogEngine. If you referenced images that already existed on your website, those references are fine and do not need to be modified.

To fix the image.axd? references and eliminate future dependencies, it's best to store these graphics explicitly on your website. Saving the individual pictures is a manual process, and you'll need to decide where to store them. You can then manually update the affected blog posts to reference the new locations, or do a search and replace later after exporting the blog's XML file. We did a combination of both.

Export the existing BlogEngine.NET data to an XML file

The export is available as the last option under Settings in BlogEngine. The default file name is BlogML.xml.

Unfortunately, even if you fixed the image references, you'll still need to translate the file to a format that WordPress can import. That requires making many changes. We actually exported the XML file, then parsed it to find references to

src="/blog/image.axd?

to identify any image references that were still in BlogEngine. That gave us the choice to either fix the original blog and re-export, or to fix it directly in the XML file.
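
A simple VBA routine like this sketch can do the scan; the file path is a hypothetical placeholder:

' Scan the exported BlogML file and report lines that still
' contain BlogEngine image.axd references.
Sub FindImageReferences()
    Dim intFile As Integer
    Dim strLine As String
    Dim lngLine As Long

    intFile = FreeFile
    Open "C:\Export\BlogML.xml" For Input As #intFile
    Do Until EOF(intFile)
        Line Input #intFile, strLine
        lngLine = lngLine + 1
        If InStr(strLine, "image.axd?") > 0 Then
            Debug.Print "Line " & lngLine & ": " & strLine
        End If
    Loop
    Close #intFile
End Sub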

Preparing WordPress

The WordPress import tools are under Tools, Import. To import the BlogML file, you need to install the appropriate WordPress PlugIn. The BlogML plugin that worked for us was BlogML-WordPress-Import.zip which can be found here. You’ll need administrator write rights to your WordPress folders to install this.

Before you modify the BlogML file, you may want to import it as-is to see the problems that need to be addressed in WordPress. You can do so and then trash the imported posts in WordPress without any harm.

Using Permalinks with Post Names

By default, WordPress saves and displays its posts by ID number in the URL. If you want posts to have more meaningful names, which also helps with SEO, set the preference under Settings, Permalinks, and choose the post name option. We set this, but the pages then triggered 404 (file not found) errors.

We discovered that this translation didn’t work on our WordPress host (Windows using IIS) unless we added a web.config file in the root of the blog with this information:

<?xml version="1.0"?>
<configuration>
<system.webServer>
<rewrite>
<rules>
<rule name="Main Rule" stopProcessing="true">
url=".*" />
<conditions logicalGrouping="MatchAll">
<add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
<add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
</conditions>
<action type="Rewrite" url="index.php/{R:0}" />
</rule>
</rules>
</rewrite>
</system.webServer>
</configuration>

Translating the BlogXML file with Microsoft Access

Now that we had established the foundation to import the XML file and display the posts with the proper permalinks, we could see several things that still needed to be fixed. This was relatively easy to do with multiple search-and-replace terms, which we handled in Microsoft Access:

  • Create a table with two text fields: one for the original value and one for the new value to replace it. We then populated the table with the terms to translate:
    • Hyperlink references. Since we migrated our blog from a subfolder (www.fmsinc.com/blog) to its own subdomain (blog.fmsinc.com), we needed to modify all the hyperlink references pointing to our web pages to explicitly point to our www.fmsinc.com web site. That meant we needed to adjust our href="/ syntax to href="http://www.fmsinc.com/, so we added these two values to our table.
    • Existing image references. Similarly, we needed to adjust our image src="/ references for graphic files to src="http://www.fmsinc.com/, so they were added. Note that we didn't search for "img src" because many references included style settings between the "img" and the "src".
    • New image references. This is also the time to map any explicit image.axd? references to the new locations of the graphics if you didn't manually edit the original posts.
  • Category names. BlogEngine saves category names as GUIDs and references the GUIDs in each post. If you don't translate these, they'll be imported into WordPress with the GUID rather than the readable category name. We used the CXMLSettings class from Total Visual SourceBook to read the categories section of the XML file so we could pair each GUID with its category name.
  • Perform the search and replace. Once the table contains all the terms to translate, we wrote a simple routine to read the XML file into a variable, loop through the table, and use the VBA Replace function for each record (see the sketch after this list). When it finished, we wrote the text to a new XML file for WordPress to import.
  • From WordPress, import the new file using the BlogML import plugin.
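
Here's a minimal sketch of that routine, assuming a translation table named tblTranslate with OriginalValue and NewValue fields (the table, field names, and paths are our hypothetical choices):

' Read the BlogML file, apply every translation pair in tblTranslate,
' and write the result to a new file for WordPress to import.
Sub TranslateBlogML()
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Dim strXML As String
    Dim intFile As Integer

    ' Read the entire exported XML file into a string.
    intFile = FreeFile
    Open "C:\Export\BlogML.xml" For Input As #intFile
    strXML = Input(LOF(intFile), #intFile)
    Close #intFile

    ' Apply each search-and-replace pair from the translation table.
    Set db = CurrentDb
    Set rs = db.OpenRecordset("tblTranslate", dbOpenSnapshot)
    Do Until rs.EOF
        strXML = Replace(strXML, rs!OriginalValue, rs!NewValue)
        rs.MoveNext
    Loop
    rs.Close

    ' Write the translated XML to a new file.
    intFile = FreeFile
    Open "C:\Export\BlogML_WordPress.xml" For Output As #intFile
    Print #intFile, strXML
    Close #intFile
End Sub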

Because we performed the translation programmatically, it was easy to test, run, and refine the entire process when things didn't work correctly. It took us a few iterations, but we were pleasantly surprised how well the posts came across.

We found that we needed to manually touch up some of our posts. WordPress doesn't require paragraph tags (<p> </p>) to define each paragraph and automatically strips them out. Unfortunately, it then displays the line breaks within paragraphs, which are normally ignored in HTML. We had to manually edit and delete those line breaks so the posts word-wrapped properly.