General Discussion
"Legacy Computer Systems": Interesting take on the ACA rollout problems
Via Josh Marshall at TPM:
Misunderstanding the Problem?
Are we not grasping the nature of the problem itself? TPM Reader ST says the issue isn't so much the website as legacy computer systems throughout the federal bureaucracy and the need to stitch them all together into a single interface.
From a TPM reader:
(snip)
The Healthcare.gov site itself is just like a server in a restaurant. The server may be the main point of interaction you have -- bringing you menus, taking your order, and bringing you food -- but without the kitchen, there's no meal. And yet when a kitchen messes up and can't get food out, the server often unfairly gets blamed. And it doesn't matter if you have the best waiter in town if the kitchen can't get its act together.
Healthcare.gov is basically just showing you your menu of insurance options, taking your order for insurance, and bringing everything back to you when the order is complete. In tech terms, it's just the front end. All the heavy lifting takes place on the back end, when the website passes your data to an extremely complex array of systems that span multiple agencies (like so many cooks in a kitchen). A central processing hub needs to get data from each of these systems to successfully serve a user and sign up for insurance. And if one of these systems -- several of which are very old in IT terms-- has a glitch and can't complete the task, the entire operation fails for that user. Only if everything works perfectly, and the data gets passed back to the website, does the user have a good experience with Healthcare.gov.
The problem is that throwing more capacity at the website itself, or praising or criticizing how it was built, is as useless as criticizing a server when it's the kitchen that messed up. Maybe cathartic, but not much else.
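The all-or-nothing failure mode the reader describes -- one balky back-end system sinking the whole signup -- can be sketched in a few lines. This is a hedged illustration only; the agency checks and their names are invented, not healthcare.gov's real interfaces:

```python
# Hypothetical sketch of the back-end fan-out described in the analogy:
# the hub queries several agency systems in sequence, and a single
# failure sinks the entire signup. All service names are invented.

def check_irs(user):
    """Stand-in for income verification."""
    return {"income_ok": True}

def check_ssa(user):
    """Stand-in for SSN validation."""
    return {"ssn_ok": True}

def check_legacy_eligibility(user):
    """Stand-in for the 'very old' system that glitches."""
    raise TimeoutError("legacy system did not respond")

def sign_up(user):
    """Naive hub: every backend call must succeed or the user gets nothing."""
    results = {}
    for check in (check_irs, check_ssa, check_legacy_eligibility):
        results.update(check(user))   # any exception aborts the whole order
    return results

try:
    sign_up({"name": "Jane Doe"})
except TimeoutError as e:
    print(f"Signup failed for the user: {e}")
```

Even though two of the three checks succeed, the user sees only a failure -- which is the "kitchen messed up, blame the server" experience the reader is describing.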
The complexity involved in making all these systems work together is tremendous. Reader RN doubted that there are 500 million lines of code involved, but if you add up what originally went into building 10 or so huge systems, across multiple agencies, plus all the stuff to make them work together, 500 million lines of code might be realistic. (Especially as many of these systems are old and have been patched and built onto many times.)
The rest: http://talkingpointsmemo.com/edblog/misunderstanding-the-problem
Thoughts?
Pretzel_Warrior
(8,361 posts)instead of transactionally interacting with legacy systems for all users.
HERVEPA
(6,107 posts)to a data warehouse, I've seen that it often gets mangled and distorted in said warehouse.
Fumesucker
(45,851 posts)
B2G
(9,766 posts)New web front end as well as the interfaces to legacy systems.
It's a bloody mess.
Pretzel_Warrior
(8,361 posts)know what you're talking about. It's always the APIs, JavaScript, etc. needed to access disparate data sources.
B2G
(9,766 posts)It's obviously very buggy, and if they didn't get that right, what kind of mess was made with the mainframe interfaces??
And as far as me 'not knowing what I'm talking about', get back to me after you've managed large IT projects for 20 years.
Pretzel_Warrior
(8,361 posts)passing data from the web front end's user inputs to the ACA website, and then the data calls to the other databases, are what created huge snafus.
Oh well, I am sure they are working like crazy to do an autopsy and put in code rewrites to get this thing running at 99% as soon as possible, with numerous software updates over multiple weekends.
B2G
(9,766 posts)my experience tells me they have their work cut out for them.
None of the experts they are 'parachuting in' are going to agree to arbitrary fix dates. Not if they have a functioning neuron. They will need time to review the code, assess the situation and put a plan in place. I know no one wants to hear this, but based on my experience, they will have to delay the mandate and most likely pull the website for a period of time.
That length of time could be extensive in political terms. I hope I'm wrong, but I fear I'm not.
nadinbrzezinski
(154,021 posts)HTML 5 compliant sites have issues playing well with COBOL
VanillaRhapsody
(21,115 posts)
kiawah
(64 posts)Most big data crunching business still use it (banks, insurance companies, etc.). It's very stable and does its job well....
nadinbrzezinski
(154,021 posts)that lack of beta testing, as in extensive testing, and legacy computer languages are leading to these issues. Chiefly, as mentioned yesterday in the NPR story, COBOL has issues talking with HTML5. It is not a matter of stability, but of communications.
(As they put it, it is in the translation program)
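For what it's worth, the "translation program" idea can be made concrete: mainframe batch systems typically exchange fixed-width records (COBOL copybook layouts), and a middleware layer reslices those records into something a modern web front end can serialize as JSON. A minimal sketch, with invented field names and offsets:

```python
# Minimal sketch of a mainframe-to-web translation step: slice a
# fixed-width record (as a COBOL copybook would define it) into a dict
# that a web front end can serialize. The layout here is hypothetical.

RECORD_LAYOUT = [          # (field name, start, end) -- invented copybook
    ("ssn",        0,  9),
    ("last_name",  9, 24),
    ("state",     24, 26),
]

def parse_record(line: str) -> dict:
    """Slice one fixed-width record into a dict, trimming space padding."""
    return {name: line[start:end].strip() for name, start, end in RECORD_LAYOUT}

record = "123456789DOE            CA"
print(parse_record(record))
# {'ssn': '123456789', 'last_name': 'DOE', 'state': 'CA'}
```

The point is that the mainframe side never "speaks HTML5" at all; the translation layer is a separate program, and it is exactly the kind of glue code where mismatched offsets or encodings cause the communication failures the NPR story alluded to.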
B2G
(9,766 posts)are literally a dying breed. It's very hard to find mainframe coders anymore. Very. I know, I try.
kiawah
(64 posts)I was the youngest Cobol programmer in my office 25 years ago, and I still am today.
B2G
(9,766 posts)
kiawah
(64 posts)
Ohio Joe
(21,756 posts)PM me. I have over 20 years and there is little to nothing I can't do on a mainframe. I'm currently in Denver but can relocate anywhere.
snooper2
(30,151 posts)Wonder how COBOL based CRM systems will play with WebRTC...
Oh wait, it won't LOL
And why did we transition away from using CORBA again
FarCenter
(19,429 posts)If the site only had to present the user with a menu of insurance companies and their policies available in a given state, it wouldn't have to interface with as many other systems.
In order to accurately compute the subsidy for each user, it has to verify income with IRS, validate identity with Experian, check SSN with SSA, etc.
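A rough sketch of that verification fan-out, with placeholder functions standing in for the IRS, Experian, and SSA interfaces (the real ones are not public), collecting a per-service outcome rather than letting one outage abort everything:

```python
# Hedged sketch of the subsidy verification fan-out: each agency check
# is a placeholder, and one of them is made to fail to show how a
# per-service outcome can be recorded instead of crashing the signup.

def verify_income_irs(user):
    return True

def verify_identity_experian(user):
    return True

def verify_ssn_ssa(user):
    raise ConnectionError("SSA unreachable")   # simulated outage

CHECKS = {
    "irs_income":  verify_income_irs,
    "experian_id": verify_identity_experian,
    "ssa_ssn":     verify_ssn_ssa,
}

def run_verifications(user):
    """Run every check, recording failures so they can be retried later."""
    outcome = {}
    for name, check in CHECKS.items():
        try:
            outcome[name] = check(user)
        except Exception as e:          # degrade gracefully, don't crash
            outcome[name] = f"retry: {e}"
    return outcome

print(run_verifications({"name": "Jane"}))
```

Whether the real site could legally show partial results before all checks clear is a policy question, but architecturally, recording which check failed is what makes a retry possible at all.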
B2G
(9,766 posts)It's critical functionality. The exchange can't work without it.
I've been pointing to the interfaces for 3 weeks now. My concern has been, and continues to be, that there are so many issues with the front end that all of the interfaces haven't even been exercised yet.
jazzimov
(1,456 posts)The website has to be able to talk to multiple different databases which are tailored to multiple types of OS. To further the restaurant analogy, it's like having multiple parts of the kitchen that only speak one language, which means that the server has to be able to speak multiple different languages. And if there is a miscommunication with any of them, then the whole order is rejected.
B2G
(9,766 posts)We'll just be branded as trolls.
Sad state of affairs.
WilliamPitt
(58,179 posts)if we don't know what the problem is.
B2G
(9,766 posts)with all of these new 'experts' descending will take a month, minimum.
Probably longer. But hey, I'm trying to be optimistic.
Then they actually have to fix, test and deploy. Scary is an understatement.
But what do I know? I'm just a right wing shill...
Pretzel_Warrior
(8,361 posts)
B2G
(9,766 posts)That's progress.
pangaia
(24,324 posts)Don't quit.
I am a musician. I don't even know how to change a pdf into html, get the picture? But I can sure tell Mozart from Haydn.
So I follow everything I can, trying to understand who screwed up.
Listening to NPR ( I know, I know) on the way home from work today the guest was some IT guy ( you probably heard it also). And not the only IT expert I have heard, of course. He seemed to make a lot of sense...
He spoke of communicating like you do, of too many programmers to begin with, and of an unbelievably complicated problem to solve; that they maybe should have finished one part of the system first -- say, get the part up first where one could just look for the options available w/o registering -- then, several months later, get another step online, etc., etc.
He also said that to bring in new 'experts' now could slow a fix down even more, because the current geniuses would have to take time off to teach the newbies what had already been done, and the newbies would have to study it all.
With my near-zero knowledge of computers (I DO know how to get rid of cookies and clean out the cache) I have to just listen and use my common sense. My common sense tells me this was a huge fuck up, and as you surmise, the fix may be a while in coming--giving repubs a chance to lick their chops in glee and attack and try to destroy even more. Hope I am wrong.
SamYeager
(309 posts)
notadmblnd
(23,720 posts)They are dinosaurs, but they're pretty reliable dinosaurs. But having them interface with a rack full of Dell or Sun servers can/will take some time. The files from the servers probably need to be sent over to the dino, and depending on how big the file(s) are, it could take some time. Then there's probably a human on the other end who is supposed to monitor the file transfer's successful completion, which at that point will need to be processed on the dinosaur via a batch job, which may be automatic - or not. Now if there's no one in the control room who has a technical understanding of what is supposed to be going on, mayhem may ensue when, for whatever reason, the file doesn't make it over to the mainframe.
When I left the IT services company that I worked at for nearly 30 years, the trend was to hire young college students on the cheap. It was my experience working with many of them that they were immature, unreliable, and just not interested in gaining an understanding of the processes they were monitoring and running. They're happy to follow a script and, in their spare time, they either play with each other or on the internet. When a problem arises, they are either unable to resolve it on their own or required to implement a huge bureaucratic process that takes hours or days to reach a resolution.
next to nothing about IT, but I've worked for a living for 33 years now, and I'd be willing to bet that the root of this whole problem can be summed up by the word "cheap." When it comes to the knowledge base as well as the infrastructure in every field I've worked in, we've been eating our young since at least the '80s. This is the result. Hard to believe we put a man on the moon once.
notadmblnd
(23,720 posts)Back when I was still working in IT, they were gung ho in regards to ISO standards, which created huge, bureaucratic, time-wasting processes.
For instance, the environment went from me being able to call a sysadmin and have them reset a password, to first opening a trouble ticket, then calling the help desk to let them know there was a trouble ticket, then waiting for the help desk to contact the sysadmin, then waiting for the admin to reset the password and call the helpdesk back, and then waiting for the helpdesk to notify me.
Things like that went from being a two minute resolve time, to sometimes hours or days depending on whether or not the person was at their desk, or at lunch or in a meeting or called off for the day. So if they're handling all their problems in a manner such as I described, yeah, it's mayhem getting it off the ground and running smoothly.
haele
(12,659 posts)And not as a porkbarrel to contractors - a WPA sort of program, that state governments can also participate in.
Are you a high school grad who wants a scholarship in IT/Computer Engineering, or even an older grad or tech who wants your Cloud/IT/IT Security certifications upgraded for free? The government will pay for it if you agree, after graduation, to work for three years as a GS-11/13 ($40K to $50K a year over the next three years) at a government facility site upgrading and administering/debugging their new or modified systems.
You want your PhD in Network IT/IT management? We'll pay for graduate school if you spend four years working out the architecture, developing the new systems, or managing the installation and implementation.
You just want two years of college getting a liberal arts or general science degree, and need work experience along with that to parlay into a livable-wage job in the private sector? Or tried for the above IT/Computer degree and could only get a general associate's because you're not really that mathematically inclined, or changed your mind about what your major was going to be?
Here's a comfy chair and table, a scanner, and an all-in-one computer hooked up to the government server and loaded with Adobe Acrobat Professional and/or an official Federal forms program, and a room full of files from 1820 into the mid-1990s. You owe us two full-time years, and we'll cover those two years at up to $40K a year plus government bennies, depending on the COLA for where you're located after the first two years of college.
Get scanning! You can be working any number of locations with any number of Federal, State, or County records. Let's make digital copies of all these disintegrating paper records and bring them up to date.
Let's have PDFs available online with keywords for all documents, and put data from records that can still be modified over time or need to be tracked (like ongoing medical records of living people, or USGS records that track environmental conditions) into active documents that will continue to be accessed by government programs.
One can still continue to go to school for the next two years as an undergrad while they're working that sort of job - lots of people work full time and continue on for their BA or BS.
A project like this can potentially employ up to 100K Americans across the nation - rural and urban, because the majority of them would be working "on site". This will be sort of like the WPA and CCC projects during the Great Depression. Pay people a living wage - not a great wage, but a living wage, help them support their families, give them an educational boost, and bring Government records and programs out of the 20th century and baseline this information to (at least) 2014 digital standards.
It may cost a bit to implement, but it will save money in the long run. It's an investment.
Haele
B2G
(9,766 posts)Scanning? Really?
We're going to overhaul the fed's computer systems with scanners?
haele
(12,659 posts)The low-IT skill labor does the scanning. That's the way it's done at hospitals, universities, corporations, etc...
I know it's a "make-work" project that sounds stupid and wasteful, but even just considering the backlog at the VA, and the mess contractors make getting involved with the patchwork contracts to upgrade various unique systems, the government has to get a handle on oversight.
Scanning is part of the process. You don't need a graduate degree or make the effort to re-create every single word from the soil acidity report of a BLM investigator in 1960, or from a tattered readiness report from a Marine Corps training during WWII, do you?
An official electronic copy (not the Mormon church copy they "allow" outsiders to access) of the census record from a Nebraska territory backwater in 1850, or a record of 1930s birth certificates from a county seat in Oregon, is also important. So the AS/AA or certificate seekers can trade two years of paid education for two years of scanning documents and data input to catch the government up.
Those records are important, just as important as leveraging additional training and education to get professionals involved with developing an actual integrated Federal computer system, that is able to create a standard that other government systems can interact with.
The problem with upgrading a system as large as the Federal Government's is that everyone wants to do it their own way. So someone has to take charge and pull all the strings together, and unless you want IBM, Google, or Booz Allen Hamilton in charge of -- and co-opting -- the project, because that's what happens when you have private industry managing government projects, you need a not-for-profit government entity such as the GSA fully in charge of Federal computers and records.
Haele
Mopar151
(9,983 posts)All need fixin', would be a good financial investment, would give a leg up to a lot of folks who need it. The old WPA acronym would be quite fitting, updated:
Work. Progress. America.
pangaia
(24,324 posts)Give them all, or those who want it, training in, IT engineering, brick laying, waste management, whatever and fix the dang place.
Same money spent, or less, and something to show for it.. and no legs blown off !
Mopar151
(9,983 posts)Food service, transportation, engineers -- even if we make it a separate service. We could employ the military to advantage in reinventing itself for a changing world, and in resuming its role as a vocational training powerhouse.
steve2470
(37,457 posts)I tried refreshing the screen and using the Incognito mode trick. Nothing worked.
I'm in no big hurry. My point is, the website is slowly improving. The best I could get before was to submit my application and it was filed away. Maybe in a month or so I can review all my coverage options and buy something.
Zorra
(27,670 posts)weasel speak.
For instance:
"And if one of these systems -- several of which are very old in IT terms-- has a glitch and can't complete the task, the entire operation fails for that user."
and
" Especially as many of these systems are old and have been patched and built onto many times.)"
What are these "old" systems he's referring to (Social Security? IRS? Experian? Blackwater?) and why doesn't he name them in the article? How does he know these systems are very old?
I'm not saying the article isn't basically true, just that I have no reason to really believe it is, except to take the author's word for it, and I don't know the author.
ljm2002
(10,751 posts)...is: K.I.S.S. -- Keep It Simple, Stupid. It sounds like that rule was ignored completely.
One of the first reports said there were simply too many files being transferred back and forth between the server system and the user's computer, and that is why the system was so easily overloaded -- if that is the case, then the legacy systems on the server side are not the issue; rather, the issue is the design of the Web-based portion of the system.
There may be issues on the server side too, that were initially masked by the frontend issues. Things can also be simplified on the server side. You don't need realtime access to a bunch of legacy systems -- in fact, that sounds rather like inviting a nightmare scenario. Without knowing more about the system it is hard to make suggestions.
This sounds very much like a loosely managed project, not surprising when we hear how many different contracting groups were involved.
Hard to imagine how much $$$ was spent on the project, for such a poor result. If the system really has 500 million lines of code, the long term goal should be to replace it.
Buns_of_Fire
(17,181 posts)that goes from A to R to C to H to P to W and THEN to B. I've seen too many otherwise workable concepts made almost useless by "designers" who were more interested in justifying their positions than in producing a solid product.
Xithras
(16,191 posts)I co-owned a 25 employee software consulting company just outside the Silicon Valley (well...Dublin anyway) for a decade. I taught computer science to college students for nearly as long. If any of my employees had written a web system for an enterprise client that was incapable of queuing requests and accounting for performance differences between enterprise systems, they'd be fired on the spot for incompetence. This is pretty fundamental 101 level stuff.
Enterprise SOA requires that performance differences between new and legacy systems be accounted for, and contingencies be put in place to account for load and communication failures between the various infrastructure components. If this wasn't done, it means that someone was mindbogglingly incompetent. No web system should EVER fail simply because a call to an external service failed. Calls should be tested, and if the expected response isn't received within an allowable period, alternate processes should be in place to allow for a graceful failure, retry the requests, queue them for later, etc. This isn't an "ideal", it's a standard when writing these types of applications. Simply failing should never be an option.
Then again, with the way government contracts are handed out nowadays, the actual code was probably written by interns and H1Bs, nominally "supervised" by middle managers who last wrote code when Java was still new. We used to do contract work for the state of California, and I could tell you horror stories about the crap that other contractors foisted off onto the state on the taxpayers' dime.
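The contingencies described above -- test the call, bound it with retries, and queue the request for later rather than simply failing -- look roughly like this in outline. The external service here is a stand-in that always times out, purely to exercise the graceful-failure path:

```python
# Hedged sketch of "never simply fail": bound each external call with
# retries, and if it still fails, queue the request for later replay
# instead of dropping the user. The service call is a placeholder.
import queue
import time

retry_queue = queue.Queue()     # requests to replay when the backend recovers

def call_external_service(payload):
    """Stand-in external call; simulates a backend that never responds."""
    raise TimeoutError("no response within allowed period")

def submit(payload, max_attempts=3):
    """Try, retry, then queue -- the request is never simply dropped."""
    for attempt in range(1, max_attempts + 1):
        try:
            return {"status": "ok", "data": call_external_service(payload)}
        except TimeoutError:
            time.sleep(0)       # real code would back off exponentially here
    retry_queue.put(payload)    # graceful failure: park it for later replay
    return {"status": "queued", "position": retry_queue.qsize()}

print(submit({"user": "jane"}))   # {'status': 'queued', 'position': 1}
```

The user gets an honest "we'll finish this later" response instead of a crash, and the queued payload gives operators something to replay once the legacy system comes back.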
Buns_of_Fire
(17,181 posts)So many times, an error condition is just left hanging out there, usually by programmers who have never had to handle data-editing procedures.
Today, more emphasis seems to be placed on prettyfication (my term) than on whether or not the damned thing works.
Sloppy. And I'd appreciate it if they got off my lawn, too.
Xithras
(16,191 posts)There's a dearth of use-case analysis and error contingency planning skills among younger programmers today. This is a very real problem for big consulting companies, because they like to hire younger programmers who work cheap and don't complain about abuse. This problem is compounded by consulting companies that routinely gauge programmer performance using the "lines written vs. time spent" metric. It encourages programmers to write lots of fluff quickly, but discourages them from actually thinking about what they're writing, how it might be used, how efficient it might be, and how their code would react to an unexpected result, input, or failure. Time spent thinking is time NOT spent writing code.
In other words, they want code monkeys, not software engineers. Sadly, companies like these tend to win a LOT of government contracts because they have lower operating costs and can underbid competitors who actually spend time to develop reliable, high quality software solutions. It's a leading reason why my company eventually gave up on government work...we couldn't match the bids of the crap shovellers.
B2G
(9,766 posts)to India. Mistake one.
Xithras
(16,191 posts)There are a lot of high quality developers in India who can assemble great software products.
There are far more crappy developers who can cobble something together that "works" well enough to meet the minimum project specifications.
The problem tends to be that companies and government agencies outsource because they're looking to lower costs, and the same basic rules apply in India as in the U.S.: You get the developers you're willing to pay for. If you want the low bidder, you have to expect low quality.
How's the adage go? Fast. Cheap. Good. You only get to pick two.
B2G
(9,766 posts)they get crystal clear specifications.
That was obviously not the case here.
hollysmom
(5,946 posts)When first working with overseas companies, they put the best people on the job to sell you, but after they have the contract, they put trainees on it. I spent a lot of time talking to my Indian friends; they were very nice people, but woefully under-experienced. It seems their company would tell them they would get annual raises but fire them after a few years, so the result was we were always training new people to do the same old work. Also, the company lied a lot to get business. They swore they were CMM level 5, but I could never get any paperwork from them. I gave very specific specs and got back garbage, and then I would have to work until 8 PM so I could speak to someone on the phone, because they were 12 hours off our schedule.
I presented the President with a chart that showed how we were spending more to get the same work done in India by this company, and the president said the tax savings would pay for it. I explained that somewhere else on this board.
winter is coming
(11,785 posts)but not so much on actual programming, especially when it comes to larger, team-oriented projects. That, coupled with the current tendency to fire them after a couple of years, means you get a steady stream of cheap but inexperienced coders. And if they're dealing with legacy code... let's just say it takes a fair bit of patience, experience, and skill to find your way around most legacy code.
hollysmom
(5,946 posts)is that culturally, you never say no to your boss. I had enough problems with American programmers feeling they could not say no, mostly in accounting firms, without also having someone say that you needed to do whatever your boss asked or get a bad review. I personally liked people who made good challenges to me, and we came up with a better product. No one is perfect, and if an underling had a better idea, I was willing to go with it and give them the credit.
As to legacy code - there is good old code and bad old code. A good system has good old code easy to follow and documented properly.
MineralMan
(146,317 posts)First, you have to understand what errors might be encountered, then deal with each error condition in a way that doesn't crash the program or log off the user. For complex systems like this one, the errors that can occur are many, especially when dealing with third party databases. If inexperienced people are doing anything to design the error handlers, they'll miss many possible error conditions and either leave the routines hanging or crash them, or just blow the whole error off and pretend it didn't happen, letting errors accumulate until some other routine causes the inevitable crash.
Errors in user input, alone, offer many opportunities for crap to come into the routine. And anticipating user errors is fraught with danger. Prompting users to fix their errors is difficult, too, and done wrong, simply compounds the error. Someone on DU wrote about the very basic thing of selecting a username. The instructions were vague, but the requirements for usernames were precise. So, many user errors result, stressing the error-handling routines and bollixing up the works in many possible ways.
You might think that someone typing on a keyboard can only make so many different errors, but it's not true. Users do incredibly stupid things, like copying and pasting weird stuff into user input fields. Validating input is crucial. If it doesn't match the template, back it goes to the user, with additional instructions. And the additional instructions have to actually get the right input from the user. If they don't, it's all a waste of time. Coding these routines takes time and imagination and then thorough testing with all possible crap that might find its way into an input field. And that's just user input.
I hated writing error handling routines. Hated it! But, what are you going to do? If you don't, your stuff doesn't work, and you constantly run the risk of crashing whatever is running. Put simply, ON ERROR RESUME NEXT is not a workable error handling routine.
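The validate-and-bounce-back loop described above can be sketched with a username rule as the example. The specific rules here are invented for illustration, not healthcare.gov's actual requirements:

```python
# Minimal sketch of input validation with precise bounce-back messages:
# reject bad input and tell the user *exactly* what to fix, rather than
# leaving the error hanging. The username rules are invented.
import re

USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{5,19}$")

def validate_username(raw: str):
    """Return (ok, value_or_message) for one user input field."""
    cleaned = raw.strip()           # tolerate stray copy-paste whitespace
    if USERNAME_RE.fullmatch(cleaned):
        return True, cleaned
    return False, ("Username must be 6-20 characters, start with a lowercase "
                   "letter, and use only lowercase letters, digits, or _.")

print(validate_username("jane_doe42"))   # (True, 'jane_doe42')
print(validate_username("J@ne!")[0])     # False
```

Note the two halves the posts above insist on: the input is validated against an explicit template, and the rejection message restates the precise requirements so the retry actually has a chance of succeeding.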
Buns_of_Fire
(17,181 posts)"Now, how many ways can they POSSIBLY screw up entering their own name?"
After my first few years in the business, I evolved into a master of defensive programming (or paranoid programming, however one might want to look at it).
That, and the fact that I HATED getting calls at 2 AM!
MineralMan
(146,317 posts)I had a small shareware software company in the late 90s and early 2000s. If I left bugs, I got support calls. I hate support calls. So, I got really good at handling user errors, and as each version emerged, it was more and more error-free. Finally, there wasn't anything left to fix. I closed the company down because shareware ceased to be a working business model.
Egnever
(21,506 posts)is the sheer number of different systems it has to pull from and compile.
You have to do all the different personal data systems, plus all of the different insurance companies from all the different states, some with many different companies.
Pretty daunting task and I would bet nigh on impossible to test thoroughly.
Xithras
(16,191 posts)Ultimately irrelevant. Any system can be made fast and reliable if you'll spend the money to engineer and architect it properly. If your system is reliant on connections to outside data sources, you should have enough content caching and queuing routines in place to keep performance acceptable. If that wasn't possible, they should have changed the presentation model to account for slow or delayed responses from the remote servers. The notion that a site should fail because its external connections are slow is 100% amateur.
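The content-caching idea mentioned above can be as simple as a small TTL cache in front of a slow legacy lookup, so repeat requests never touch the back end at all. The plan-lookup function here is a placeholder, not any real interface:

```python
# Sketch of content caching in front of a slow external data source:
# wrap the lookup in a small TTL cache so repeat visitors are served
# from memory. The backend function is a stand-in.
import time

def fetch_plans_from_backend(state):
    """Stand-in for a slow legacy call (imagine this takes seconds)."""
    return [f"{state}-bronze", f"{state}-silver", f"{state}-gold"]

_cache = {}                      # state -> (expires_at, plans)
TTL_SECONDS = 300                # plan menus change rarely; cache aggressively

def get_plans(state):
    now = time.monotonic()
    hit = _cache.get(state)
    if hit and hit[0] > now:     # fresh cache entry: skip the backend entirely
        return hit[1]
    plans = fetch_plans_from_backend(state)
    _cache[state] = (now + TTL_SECONDS, plans)
    return plans

print(get_plans("CA"))           # first call goes to the backend
print(get_plans("CA"))           # second call is served from cache
```

For browse-only traffic (people just looking at the menu of plans, not computing subsidies), this kind of caching means the legacy systems' speed barely matters.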
I've been writing web SOFTWARE (not pages...software) since 1995. I've worked for massive corporations, government agencies, and small startups. I've owned a consulting company with actual employees that did tens of millions of dollars in projects during its existence, and taught software development in college classrooms after we folded it up. Heck, I'm typing this very message on one screen of my four-head programming machine, while the other three screens are full of IDE's and data related to a cloud video editing solution I'm currently writing for a client in the MOOC space.
I know how to engineer large scale web projects. More importantly, a company being paid hundreds of millions of dollars SHOULD know how to do the same thing. Believe me, and all my experience, when I tell you that blaming poor site performance on "slow external services" is NOT an excuse that would be accepted by anyone with any kind of experience working on software at this scale.
Pretty daunting task and I would bet nigh on impossible to test thoroughly.
About a decade ago my company landed a ~$3 million project to design and implement a web based incident reporting and tracking system for the California EPA. As part of that project, we brought on five QA staffers who were ONLY paid to break things and document the failures. Most had programming backgrounds, but they didn't write a single line of code on the software, and weren't allowed to interact with the developers.
They were paid $25,000 each on a 3 month contract that required each of them to document at least one new bug or recommended optimization a day. Nothing was off limits...they could throw anything they wanted at the software, hack on it all they wanted, and basically try to force every flaw out of it that they could. Minor bugs like typos earned them an additional $50 bounty per report. Major bugs that caused functionality issues got them a $100 bounty per report. And if they managed to crash the system, they'd earn a $1,000 reward.
We did that for a relatively mundane government project with a relatively modest budget. This was a half-billion dollar project to launch one of the most anticipated and high-traffic government websites to be developed since the IRS went online in the 90's. They should have had a small army of QA people testing out every use-case that could be imagined. The fact that they didn't is disturbing...someone was trying to cut corners to save time & money while padding profits.
B2G
(9,766 posts)Money was evidently not an object.
What will be interesting to see is what comes out from the 'worker bees'. My educated guess is they told their immediate managers that it was going to be a complete clusterfuck, and they were told to sit down, shut up and code.
The blame for this fiasco falls directly on senior management/admin officials that refused to listen to objections and see the warning signs for the past year.
You can force anyone to embark on a Chinese deathmarch, but you can't make them survive it.
Xithras
(16,191 posts)Rule #1 when taking on consulting clients: Walk away from impossible projects. All they do is ruin your reputation and land you in court.
If the contractor took the half-billion dollar contract knowing they couldn't meet the deadlines and requirements adequately, and deliver the product that the government expected, then they committed fraud. If they didn't know that they couldn't hit those deadlines, then they were incompetent. Either way, the blame lands back on the companies that wrote the system.
hollysmom
(5,946 posts)at least in the last few companies I worked for, IT was given dates without any rationale by sales people and managers. They just go by whatever people ask for and then expect you to do miracles under budget.
winter is coming
(11,785 posts)I think you can guess why.
FarCenter
(19,429 posts)Centers for Medicare & Medicaid Services, a part of Health and Human Services, maintained responsibility for system integration and test.
So the "one throat to choke" is not a private company.
It's like building a house with plans you drew yourself and being your own prime contractor. If the electrical service panel won't support the HVAC system, and the kitchen appliances are incompatible with the circuits and plugs, it's your problem.
http://www.cms.gov/About-CMS/About-CMS.html
http://www.cms.gov/About-CMS/Agency-Information/History/index.html
Egnever
(21,506 posts)Working on a project that was tiny in scope by comparison. And you had the full backing of the company you were doing the project for.
This site on the other hand has had nothing but people trying to sabotage it from the start by cutting funding every place they could. There was no way for them to possibly get this done in the time frame they had doing it your way. Too many road blocks and no way to know until very late in the process who all they would have to interface with. This one Web site has to serve over half the nation because of Republicans refusing to set up their own exchanges.
Excuse me while I dismiss your comparison as ludicrous. There has never been a Web site ever trying to tie so many databases together and serve so many people at the same time while constantly being undermined at every turn.
Inexcusable my butt.
B2G
(9,766 posts)Inquiring minds want to know.
Egnever
(21,506 posts)And if you for a second think there aren't Republican-funded groups actively trying to bring that site down by any means they can, be they simple DDoS attacks or other means, you are incredibly naive.
Xithras
(16,191 posts)DDoS attacks only compromise sites that don't anticipate and plan for them. Any idiot can link up Cloudflare (or its numerous competitors) to defend themselves against that kind of thing nowadays. Hell, for only a few million of the half-billion dollars they spent, the contractor could have built their OWN Cloudflare-style filtered & distributed CDN and accomplished the same thing.
HA website architecture has changed dramatically over the past decade, and things like DDoS attacks only impact those who aren't keeping up.
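To make that concrete: the kind of filtering an edge layer applies can be sketched with a minimal token-bucket rate limiter. This is an illustrative toy, not Cloudflare's actual logic; the rate and capacity numbers are arbitrary.

```python
import time

class TokenBucket:
    """Per-client token bucket: allows `rate` requests/sec with bursts
    up to `capacity`. Edge filters (CDNs, reverse proxies) apply this
    kind of policy long before a flood ever reaches the origin servers."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # drop or challenge this request at the edge

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(100)]
# A flood of 100 near-instant requests: only roughly the burst
# capacity (~10) gets through; the rest are rejected cheaply.
print(sum(results))
```

The point of the design is that rejection is cheap: the expensive back-end work only happens for traffic that survives the filter.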
There's a huge difference between running a private or low-tier standalone website in a colo or on AWS, and putting together a modern enterprise or global web system. Single points of failure get you fired nowadays.
I'm not naive, and I probably have more experience dealing with web architecture issues than 98% of the people on this board. I don't have much sympathy for low quality systems engineering, which this entire project reeks of. This was a performance failure by the contractor and developers, and they should be crucified for it.
B2G
(9,766 posts)I've seen actual costs to date ranging anywhere from $300-600 million.
These are not DDoS attacks by some Repuke group. It has not been defunded. They have not stopped anyone from approving incredible increases to get the system in place.
I am not the one being naive here.
Xithras
(16,191 posts)Whether or not some elements of the government supported or sabotaged their work is irrelevant. The software and systems were designed by the contractor. The contractor is responsible for the QA, and performs it internally.
As for the rest, I just don't know what to tell you. Whether they were connecting to one source or 50 is irrelevant. If they were using proper Design Patterns to standardize error handling, pooling, queuing, or whatever load-compensating measures they selected, the actual number of connections shouldn't matter.
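A minimal sketch of what such a standardized pattern looks like. The backend names here (`irs_check`, `flaky_legacy_check`) are invented for illustration; the point is that whether there is 1 connection or 50, every one goes through the same wrapper with the same retry and error-normalization policy.

```python
import random

class BackendError(Exception):
    """Uniform error type: every backend failure, old system or new,
    surfaces to the caller in exactly the same shape."""
    def __init__(self, source, cause):
        super().__init__(f"{source}: {cause}")
        self.source = source

def call_backend(name, fetch, retries=2):
    """One wrapper for every backend connection: retry transient
    failures, then normalize whatever went wrong into BackendError."""
    for attempt in range(retries + 1):
        try:
            return fetch()
        except Exception as exc:
            if attempt == retries:
                raise BackendError(name, exc)

# Hypothetical backends: one reliable, one flaky legacy system.
def irs_check():
    return {"income_verified": True}

def flaky_legacy_check():
    if random.random() < 0.7:
        raise TimeoutError("legacy system timed out")
    return {"eligible": True}

print(call_backend("IRS", irs_check))
try:
    call_backend("legacy", flaky_legacy_check)
except BackendError as e:
    print("handled uniformly:", e.source)
```

With a pattern like this, adding a 51st data source is just one more `call_backend` call; the error-handling story doesn't change.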
As for the timeline, it's the contractor's duty to recognize and refuse the impossible. If you take a client's money knowing that you can't deliver the product they're requesting on the timeline they require, you're committing fraud.
FarCenter
(19,429 posts)http://www.lifehealthpro.com/2013/10/22/healthcaregov-builders-saw-red-flags
Identity management, which has been a trouble spot, appears to be a pre-existing CMS system.
http://www.civicagency.org/2013/10/learning-from-the-healthcare-gov-infrastructure/
pinto
(106,886 posts)dkf
(37,305 posts)The system is only as fast as the slowest of these. I think I read the one that checks for citizenship was never built to handle all that flow.
So it checks your info with the IRS, then sees if you or your kids are enrolled in any other federal health programs, VA, Medicare, Medicaid, chip, Indian affairs, etc etc, then it verifies your id, if you are a citizen, then it sends you to your particular state's insurance options. So many interactions for each person's entry and all of them must be decently responsive.
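The arithmetic of that chain is easy to sketch. The system names and latencies below are hypothetical, but they show the two properties described: total time is the sum of every hop, and any single slow or failing system drags down every signup.

```python
# Each enrollment must pass a chain of checks; total latency is the sum
# of the per-system latencies, so the slowest system sets the floor.
# These systems and timings are illustrative, not the real hub's.
checks = [
    ("IRS income check", 0.4),
    ("Federal-program lookup (VA/Medicare/Medicaid/CHIP)", 1.2),
    ("Identity verification", 0.8),
    ("Citizenship check", 2.5),   # an older system never sized for this load
    ("State exchange handoff", 0.3),
]

total = sum(latency for _, latency in checks)
slowest = max(checks, key=lambda c: c[1])
print(f"end-to-end: {total:.1f}s; bottleneck: {slowest[0]} at {slowest[1]}s")

# And the chain fails closed: if any one check errors, the signup fails.
def run_chain(results):
    return all(results.values())

print(run_chain({name: True for name, _ in checks}))
print(run_chain({**{name: True for name, _ in checks},
                 "Citizenship check": False}))
```

That second property is why one under-provisioned legacy system can make the whole front end look broken.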
BlueStreak
(8,377 posts)The problems that are most evident are simple web design things like handling exceptions gracefully.
And the ones I am seeing have nothing to do with legacy systems. I can't even get it to display the same set of policies consistently, and that surely should be a new database that is not embedded in any legacy system.
There is no excuse for any performance problems accessing the database of available policies because:
a) it is read-only, and
b) it can easily be clustered to any degree necessary to meet the demand.
And there is no excuse for failing to catch exceptions and inform the user appropriately when these errors do occur.
This is Web design 101 stuff.
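Points (a) and (b) can be illustrated with a toy read-only catalog. The plan data below is invented; the point is that a read-only dataset can be cached or replicated freely, so the same query is always fast and always returns the same answer.

```python
from functools import lru_cache

# A read-only plan catalog: the data never changes mid-session, so it
# can be replicated to any number of nodes or cached in memory safely.
# Plans and keys here are made up for illustration.
PLAN_CATALOG = {
    ("CA", "40-49"): ["Bronze A", "Silver B", "Gold C"],
    ("TX", "40-49"): ["Bronze D", "Silver E"],
}

@lru_cache(maxsize=None)
def available_plans(state, age_band):
    # In production this read would hit a replica; caching is safe
    # precisely because the catalog is read-only.
    return tuple(PLAN_CATALOG.get((state, age_band), ()))

first = available_plans("CA", "40-49")
second = available_plans("CA", "40-49")
print(first == second)  # deterministic: same query, same plans
```

Inconsistent results from a dataset like this point to a design problem, not a capacity problem.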
bluestate10
(10,942 posts)my fucking throat out when I hear republicans talking about how angry they are that the website isn't working properly. Every single person in the Obama Administration, including the President, should have known that republicans would fucking attack if even the smallest problem happened. The Obama Administration stepped into a steaming pile of shit when it could have avoided the problem by starting with the base assumption that the roll-out of Obamacare COULD NOT HAVE PROBLEMS, or republicans would be having orgasms all over Washington DC. I don't want to hear more from the Obama Administration threading the fine line between the law and the website, I want to hear that the fucking problem is fixed and people are signing up without problems.
BlueStreak
(8,377 posts)One of the biggest problems I see now is a lack of what computer scientists call "deterministic" results. That is just a fancy way of saying that for better or worse, you expect at a minimum to get the same results each time. The architecture selected to stitch together these various computer systems is faulty to the core. It obviously does not handle the most basic error conditions gracefully. When it is impossible to display some information (because of a time-out or other error) you cannot just leave a hole in your page. You must intercept the error and inform the user what is happening.
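A minimal sketch of that interception, with hypothetical function and plan names: the failure is caught and turned into a message the user can act on, never a hole in the page.

```python
def fetch_plan_details(plan_id, timeout=2.0):
    # Stand-in for a back-end call that may fail; in this sketch it
    # always times out so the error path is exercised.
    raise TimeoutError(f"back end did not answer within {timeout}s")

def render_plan_section(plan_id):
    """Never leave a hole in the page: intercept the failure and
    tell the user what is happening."""
    try:
        details = fetch_plan_details(plan_id)
        return f"<section>{details}</section>"
    except TimeoutError:
        return ("<section class='error'>Plan details are temporarily "
                "unavailable. Please try again in a few minutes.</section>")

html = render_plan_section("silver-b")
print("error" in html)  # the user sees an explanation, not a blank spot
```

Either branch returns well-formed markup, so the page renders the same way every time a given condition occurs, which is exactly the determinism being asked for.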
No, I'm sorry. This is not the fault of old systems. This is incompetent web design, plain and simple.
I'm not denying it can be a challenge to integrate systems from different technologies and different eras. But this happens every single day, quite successfully, in the world of IT. It is no different from saying that the Interstate highway designer must put a slow-down ramp in place when transitioning people from the highway to city streets. Every competent highway engineer knows that. Every competent IT practitioner (and I see no evidence that there were any competent practitioners involved in the healthcare.gov site) knows the things I am talking about.
Let's stop trying to rationalize this mess. It is arguably the biggest IT screw-up in 25 years. But nothing is gained by looking backwards. We need to fix it ASAP.
hollysmom
(5,946 posts)Calling good business practices back to order would be, but I don't expect that to happen.
BlueStreak
(8,377 posts)This is no different from the trillions in Pentagon bids that end up completely wasted. As far as contractors go, it is just a game. They are never invested in the results. Their only motivation is to meet the letter of the bid and spend as little money as possible doing so.
We aren't going to change that in the next few weeks, and that won't help us get healthcare.gov working better. But I do agree it is a huge problem, and the sort of thing you never seem to hear Republicans complain about. They loves them some big government contracts.
DeSwiss
(27,137 posts)K&R
riqster
(13,986 posts)And many of the required interactions are mandated by law; at the same time, government has refused (for decades) to adequately fund the upkeep, upgrading, or replacement of those legacy systems.
jazzimov
(1,456 posts)BL: I almost have the sense that HealthCare.gov is in de facto shutdown. Here's why: Government has to fix the back end before the front end. The demand here is real. I don't think anyone can dispute that millions of people want to sign up. So if they fix the front end for consumers and thousands of people or hundreds of thousands of people being enrolled before they fix the back end, we'll have a catastrophic mess.
When insurers are getting 10 or 20 or 50 enrollments a day they can clean the errors up manually. But they can't do that for thousands of enrollments a day. They have to automate at some point. So I think the Obama administration doesn't want to cross the red line to shut the system down, but I think this is effectively a shutdown in which they don't say they've shut it down but it basically is shut down.
(emphasis added)