Friday, May 16, 2008

SQL2008: Solving the file system vs. database BLOB quandary

I found a recent post on The Data Platform Insider blog very interesting:
One of the most exciting new features in SQL Server 2008 is the ability to store files and BLOBs directly in the file system, while maintaining transactional consistency with a SQL Server 2008 database. SQL Server 2008’s new FILESTREAM attribute for VARBINARY data type solves the age old dilemma facing developers and IT Pros: Is it better to store files directly in a database or store them in the file system with path and filenames stored back in tables to maintain the relationship with the database?

We've been fighting with this for years, for all of the reasons cited in the blog posting.

It doesn't solve one big problem, though: Some of our customers have multiple gigabytes of images and documents each. Add that to half a gig or more of transactional data, and then multiply that by a few hundred customer databases, and you've got a real challenge storing and moving database backups around.

To paraphrase (and, apparently, misquote) Senator Everett Dirksen, "A terabyte here, a terabyte there, and pretty soon you're talking a lot of data."

Thursday, May 15, 2008

An oldie but a goodie: How the customer explained it...

I just walked by Steve Burke's desk (he's our Manager of Data Conversions and Imports, or something like that), and noticed a great cartoon that describes the challenges of communicating and faithfully executing customer requirements far better than I have in previous posts. It's been floating around for a few years, but it's pretty clever.

It addresses both the argument for UI-first development and the need for a bridge (in the form of Chief Architect, in our case) between Product Management and Engineering.

Unfortunately, I have searched but have been unable to find out who the author of the cartoon is, so I can't give credit for it. If anyone knows, please submit a comment and I'll add an acknowledgment.

(Click the thumbnail to see the full-sized image.)


Wednesday, May 14, 2008

The NPI debacle in layman's terms

[Disclaimer: You probably don't want to read this. It's dry and boring. I dozed off twice while writing it. It may not even be all that accurate. Plus, you can get the same information from this CMS FAQ.

But they say that the best way to learn is to teach. So, as I struggle to understand how we got into the mess that we're in with NPIs, perhaps the best thing I can do is to try to explain it here.

So, go ahead and read if you like...but don't say I didn't warn you...]

What is the NPI?

The NPI (National Provider Identifier) is a 10-digit number used to identify healthcare providers. (A "healthcare provider" can be an individual person, as in the case of a physician or nurse, or a group of individuals that submits claims to certain insurance carriers as a single business entity.)
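As an aside, the tenth digit of the NPI is a check digit, computed (as I understand the standard) by running the Luhn algorithm over the nine identifier digits prefixed with the constant "80840". Here's a quick sketch in Python of what that validation looks like:

```python
def is_valid_npi(npi: str) -> bool:
    """Check an NPI's Luhn check digit (per my reading of the standard).

    The Luhn sum is computed over the constant prefix "80840" plus the
    first nine digits; the tenth digit must equal the resulting check digit.
    """
    if len(npi) != 10 or not npi.isdigit():
        return False
    payload = "80840" + npi[:9]
    total = 0
    # Walk from the right, doubling every other digit starting with the
    # rightmost payload digit (the one adjacent to the check position).
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    check = (10 - total % 10) % 10
    return check == int(npi[9])

# 1234567893 is a syntactically valid example number (check digit 3):
print(is_valid_npi("1234567893"))  # True
print(is_valid_npi("1234567890"))  # False
```

(The example numbers above are just illustrations; they don't belong to any real provider that I know of.)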

The NPI was mandated by the Health Insurance Portability and Accountability Act of 1996 (HIPAA). (Standard unique identifiers are required for both healthcare providers and health plans, but the identifiers for health plans have not yet been implemented.)

What does the NPI replace?

Historically, different insurance carriers have used a variety of different numbers to identify providers. Medicare, for example, used to issue its own proprietary identifiers (PIN, UPIN, OSCAR, NSC). Many Medicaid payers and most commercial payers expected the provider's EIN (Employer Identification Number, also known as Federal Tax ID). Still others required the provider's Social Security Number.

To further complicate the issue, some payers may require multiple identifiers. Others may give providers a choice of enrolling under, say, their EIN or their SSN.

What problems is the NPI supposed to solve?

All of the healthcare providers and insurance carriers in the United States are part of one ecosystem, with many millions of paper and electronic transactions taking place between the various parties every day. It shouldn't be a surprise to anyone that multiple provider identifiers would cause confusion and inefficiency.

One example: Primary claims submitted to Medicare, after being adjudicated by Medicare, are automatically forwarded on to the secondary payer (if there is one). Medicare can use the PIN to identify the provider, but the provider's Medicare PIN means nothing to, say, Medicaid or Aetna. So, in order for the claim to be forwarded to and paid by the secondary payer, the provider must include the EIN...or SSN...or the secondary payer's proprietary identifier...or whatever, on the claim.

The NPI only addresses these issues if all providers and carriers switch from whatever identifiers they used in the past to the NPI. Consequently, all HIPAA covered entities (providers, payers, and clearinghouses) will be required to switch.

Who issues NPIs to providers?

The Centers for Medicare & Medicaid Services (CMS) issues NPIs using the National Plan and Provider Enumeration System (NPPES). (NPPES can also be used to look up NPIs.)

Can a single physician or other provider have more than one NPI?

Allowing a single healthcare provider to have more than a single NPI would violate the HIPAA requirement that NPIs uniquely identify a single provider. But this is healthcare we're talking about, so I wouldn't be surprised if it happens.

So, once a provider has an NPI, how do payers find out what it is?

As part of CMS's planning for the NPI transition, they conceived the notion of a "crosswalk" (a commonly-used term in healthcare that has been overloaded for this purpose). Basically, payers are expected to accept both their legacy identifiers and the NPI for a period of time, during which they are supposed to "crosswalk" the identifiers and associate the NPI with the corresponding providers.

On May 23, 2008, this crosswalk period officially ends, and all payers are supposed to accept claims with only the NPI. Of course, again, this is healthcare, so some payers (and we don't know how many) will fail to meet that deadline, or their systems will be so whacked that they will continue to reject claims until they can get their software fixed.

Tuesday, May 13, 2008

No apologies: The reality of technical debt

I attended an Agile roundtable this evening, and one of the sponsors, Jonathan Rayback (an Agile thought leader in the Salt Lake City area) introduced me to the concept of "technical debt". It's been around for a long time (at least since 1992), so I'm late to the party, but the idea really resonates with me.

The term was apparently introduced by Ward Cunningham, and has been expanded upon and clarified by Steve McConnell, among others.

Here's the definition supplied by the venerable (but oft maligned) Wikipedia (hyperlinks removed):

Technical debt is a term coined by Ward Cunningham to describe a situation where the architecture of a large software system is designed and developed too hastily.
No one who has been developing software professionally for more than 5 minutes has been able to avoid technical debt.

Jonathan illustrated the idea with a whiteboard graph that looked something like this:

In this chart, the project was expected to be completed in about 20 days. About 13 days into the project, it became clear that an additional 4-5 days would be required to complete it in a high-quality way. However, a business decision was made to stick to the original schedule by working more hours, cutting corners, or making some other compromise.

The area between the red line (the business-mandated schedule) and the green line (the "ideal" schedule, for the sake of quality) represents the technical debt incurred during the course of the project.

Like financial debt, technical debt must be repaid at some point. And, like financial debt, not only must the original principal be repaid (in the form of refactoring, bug fixes, etc.), but also the accrued interest (customer complaints, support calls, etc.)
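The repayment dynamic can be put in toy numbers (all figures invented): suppose cutting corners saves 5 days now, the eventual refactoring costs 5 days of principal, and the rough edges cost half a day of support "interest" per release until you pay it off.

```python
def total_cost_of_shortcut(days_saved: float,
                           interest_per_release: float,
                           releases_before_payoff: int,
                           refactor_days: float) -> float:
    """Net cost (in days) of a shortcut: principal repaid later, plus
    accrued interest, minus the time saved up front. Toy model only."""
    principal = refactor_days
    interest = interest_per_release * releases_before_payoff
    return principal + interest - days_saved

# Refactor after 4 releases: 5 days principal + 4 * 0.5 days interest
# - 5 days saved up front = 2 days net cost.
print(total_cost_of_shortcut(5, 0.5, 4, 5))  # 2.0
```

The point of the exercise: the longer the debt sits unpaid, the more the interest term dominates, which is exactly why some debt (strategically incurred, promptly repaid) is fine and some is ruinous.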

To advance the metaphor further, Jonathan pointed out that not all technical debt is bad.

Most of us who own homes owe a sizable financial debt in the form of a mortgage. Did I make a mistake by going into debt to own my home? Certainly not: I estimate that my family's housing expenses over the past 11 years have been far less than they would have been if we had been renting during that time, even if we had lived in a much smaller home. It would have been ridiculous to wait to buy a home until we had saved up enough money to pay for one.

In the early stages of developing our software and service offerings at AdvancedMD, we incurred huge technical debt. We've had good-natured debates about whether that was a mistake or not. On the one hand, our coding and deployment efficiency is lessened by shortcuts we've taken in the past. On the other hand, we were first to market with a web-native PMS (by years), and we remain light years ahead of our nearest competition.

We've also made great strides towards paying off that debt (and minimized the accumulation of new debt as much as possible). We've rearchitected major components of our application over the years, so that, as a whole, I'd put our code up against just about anyone's. Sure, it would have been great if we hadn't had to do that rework, but, again, in most cases the debt was justified.

Not all technical debt is created equal. Here's how Steve McConnell categorizes technical debt:

Non Debt
Feature backlog, deferred features, cut features, etc. Not all incomplete work is debt. These aren't debt, because they don't require interest payments.

Debt
I. Debt incurred unintentionally due to low quality work
II. Debt incurred intentionally
II.A. Short-term debt, usually incurred reactively, for tactical reasons
II.A.1. Individually identifiable shortcuts (like a car loan)
II.A.2. Numerous tiny shortcuts (like credit card debt)
II.B. Long-term debt, usually incurred proactively, for strategic reasons

Only debt in Category I should be a source of embarrassment...and, yes, we have our share of that kind of debt, although far less than a few years ago.

I make no apologies for the other types of technical debt that we've accrued, because we've overcome the odds by proving both our technology model (which, in 1999, was utterly original) and our business model.

Monday, May 12, 2008

How is your performance measured and judged?

My transition from head of Engineering to Chief Architect has been marked by one epiphany after another. Here's the latest:

In a previous post, I laid out some of the differences between the Chief Architect and the head of Engineering. An obvious question is: Why can't one person do both? Isn't architecture a key component of Engineering?

The same question can be asked about the separation of Product Management from Engineering (and, by extension, the Chief Architect). Don't they have the same basic goals of building high-quality software?

I think most people can easily distinguish between Product Management and Engineering: Product Management decides what to build, and Engineering builds it. The distinction between Chief Architect and Engineering is less obvious...

...until you think about how the levels of performance of the three departments are judged.

The head of Engineering is judged by the quantity and quality of software that comes from his teams. (The quantity is primarily a product of the developers, while QA has primary responsibility for the quality.)

If software releases are consistently behind schedule, or the support burden following releases is consistently overwhelming, where does the buck stop? With the head of Engineering. (Sorry, Sheridan.)

On the other hand, if software releases consistently fail to resonate with customers, or resources are consistently applied to projects that yield no revenue or other value, then you have a problem in Product Management. The head of Engineering has neither the authority to decide what gets built nor the accountability for those decisions.

By the same token, when releases are technically successful (goals are met and quality is high), the Engineering team has every right to celebrate and take credit. And when customers rave over the latest release because the enhancements are both timely and well-designed, Product Management can take the kudos.

So, how will my performance be judged as Chief Architect?

One thing is certain: I am no longer judged by the quantity or quality of code that gets written. I can't be, or I would be so caught up in writing code and helping the Engineering staff keep up with their demands that I would be unable to do my job, which I described in an earlier post.

Nor can it be based on the reception (hot, cold, or lukewarm) of new enhancements, by customers or by AdvancedMD staff.

Instead, my performance will be judged in more nebulous terms:

How well do I communicate our current architecture, its strengths and weaknesses, and our company's technology road map to our CEO, board, and other executives?

How faithfully do the architectural and technology changes that I propose and endorse reflect and support the high-level strategies of the company, as defined by the management team?

How effectively do I bridge the language and culture gap between the Engineering teams and Product Management?

How well do our applications and subsystems scale? How easy is it for our DCO (Data Center Operations) staff to go from supporting about 10,000 providers today to 100,000 providers in just a few years?

How stable and robust are our interfacing and interoperability infrastructures?

How well do I communicate and tout our technical accomplishments to those outside of AdvancedMD?

The prospect of finally, after eight years, having time to devote to these and other issues is exhilarating, and at the same time daunting and just a little bit scary.

The accomplishments of the past have been rewarding. But it won't be long before AdvancedMD will be asking me, "So, what have you done for me lately?" Here's hoping I have a good answer!

Friday, May 9, 2008

It's good to be king!

Only Microsoft could get away with launching a product that won't actually be released for another six months or so.

Was it just me, or was anyone else expecting a concurrent release of Windows Server 2008, Visual Studio 2008, and SQL Server 2008?

Turns out we were confused. What's been happening all around the world for the past several weeks at "Heroes Happen Here" events is the launch of those three products, not the release.

Apparently, "launches" no longer have to coincide with "releases", as evidenced by Microsoft's announcement of the delayed release of SQL Server 2008, in a post on The Data Platform Insider blog on January 25, 2008.

That delay is no big deal, of course--SQL2005 is a fine product, and I think we'd all agree that a high-quality release is more important than a quick one. Just disappointing...especially to our Engineering and IT teams, who have their eyes on a couple of the juicier features.

As Joe Wilcox says on eWeek's Microsoft Watch, Microsoft doesn't need to rush, because those of us who license Microsoft products under their SPLA or annuity models are going to keep ponying up the cash in anticipation. The only possible downside may be the acquisition of MySQL by Sun. (Huh? A billion bucks for "free" software?)

Thursday, May 8, 2008

An IE security improvement that doesn't make our lives more difficult?

One of the key advantages of AdvancedMD over other (generally client/server) practice management systems is the fact that it is a browser-based application, built on the ubiquitous Microsoft Internet Explorer. That means that anyone can pick up a commodity PC at Best Buy or Costco, take it home, and run AdvancedMD without inserting a CD or contacting their PC support people.

AdvancedMD does, however, use a few ActiveX controls that allow us to do things that aren't normally permitted by the browser. Things like transparently saving temporary files to the local disk, compressing data, and managing printers.

When we first released AdvancedMD (as PerfectPractice.MD) back in 2000, Internet Explorer was on Version 5.0. Back in those days, the Internet was still relatively new, and Microsoft hadn't yet become every hacker's favorite target. So, security was a topic of discussion, but not the huge focus that it became in the months leading up to the release of Windows XP in August of 2001. (I'm relying on a Wikipedia article for these dates.)

In those good ol' days, ActiveX controls just worked. Sure, it helped to sign them (or, rather, the CAB files that contained them), but aside from that it was a piece of cake to deploy a control that could access the registry, read and write files, format the hard drive, beat the dog, stampede the horses, etc. The Wild, Wild West of the World Wide Web.

Since that time, the wizards at Microsoft have had a little fun at our expense (albeit, to be fair, to the benefit of IE users):

  • The AdvancedMD domain must be added to the Trusted Sites zone in order for ActiveX controls and many other functions to work.
  • The ActiveX controls within CAB files must be signed, not just the CAB files themselves.
  • By default, windows can't be sized or positioned in such a way that they appear off-screen, even in the Trusted Sites zone.
  • A website can't be added to the Trusted Sites zone via JavaScript (IE6) or ActiveX controls (IE7).
  • On and on and on...

As a general rule, the AdvancedMD Engineering team emits a collective groan whenever a new version of IE comes out, because it means days of testing and retrofitting to comply with new security features.

IE8 will no doubt present some new challenges, but at least one new feature mentioned on the IEBlog may actually help us out.

For quite some time, some of our larger customers (the ones who actually have IT staff) have complained that, every year or so, we deploy new versions of our ActiveX controls. Since they have restricted their users' Windows accounts from installing software, their users are unable to install the new controls. Instead, an IT person has to walk from machine to machine, logging in as an administrative user and allowing the AdvancedMD browser application to install the controls.

IE8 has a new feature called "Per-User (Non-Admin) ActiveX" that, presumably, will make this a thing of the past. According to the IEBlog post:

"Running IE8 in Windows Vista, a standard user may install ActiveX controls in their own user profile without requiring administrative privileges."

Sounds pretty good to me. Now if we could just get away from ActiveX controls altogether...

Tuesday, May 6, 2008

Events and Travel Plans for 2008

For the benefit of other AdvancedMD employees who may care, and the morbidly curious, here is a list of the events that I plan to attend this year:

Microsoft Heroes Happen Here - May 20
Free Microsoft swag! They'll be giving away free copies of Windows Server 2008, SQL Server 2008, and Visual Studio 2008. (I wonder if I could trade Windows Server 2008 in for a copy of Windows 2000 Server...?)

Utah HIMSS Annual Spring Conference - June 6
It's cheap and it's local. To be honest, I haven't been able to find out exactly what goes on at these things, but for $85 and no flights or hotels, I figure it's worthwhile to find out.

Gartner Enterprise Architecture Summit - June 9-13
In December 2006, I was invited by Microsoft's BizTalk team to give a 15-minute presentation at the Gartner IT Summit in Orlando about how AdvancedMD has incorporated SOA and SaaS concepts to build a successful business. I didn't have an opportunity to attend many other sessions, because I was busy hacking together my presentation, but I was sufficiently impressed by their lineup of speakers and topics that I'd like to check this architecture conference out and see whether Gartner puts on a worthwhile event.

HIMSS MS-HUG Tech Forum - August 26-27
I always learn something from these events in Redmond, beyond Microsoft's sales pitches. Most of the value comes from hearing about the challenges that other HIT people are facing, and how they are dealing with them.

Microsoft PDC '08 - October 27-31
It's been about 10 years since I last attended a PDC. Like most technical conferences targeted at developers, the PDC doesn't have many answers...but it helps you identify the questions so that you can go home and research the answers. Other premier conferences like VSLive offer certain advantages, but I attended VSLive in San Francisco a few years ago, so I'd like to check out the PDC this year.

[Edit: Scratched TEPR from my schedule. I hadn't realized before that Monday, the only day I could go, was a half day, with only the exhibit hall available in the afternoon. Too costly for just a few hours of value.

TEPR 2008 - May 19
TEPR ("Towards the Electronic Patient Record") is, to the best of my knowledge, THE EMR conference to go to. Of particular interest is their EMRCompare program, which allows participants to watch a bunch of EMR vendors demonstrate the basic workflow of their products. The main TEPR conference goes from the 19th through the 21st of May, but I'll only be hitting the first day so that I can get back for... ]