With so many retailers being hit by cyber attacks, it's easy to conclude that a data breach requires a thief. Not necessarily. Saks made clear last week that it can breach itself quite efficiently.
That revelation comes courtesy of BuzzFeed News, which visited the Saks site and noticed private data about quite a few fellow visitors and shoppers.
"Until recently, unencrypted, publicly accessible web pages on the site contained tens of thousands of records for customers who signed up for wait lists to buy products," the story said. "The records included email addresses and product codes for the items customers expressed interest in buying and some also contained phone numbers. Each record also included a date and time and one of a handful of recurring IP addresses."
Saks leapt into action, quickly removing the exposed data — right after a reporter called the site seeking comment. *sigh*
I wish I could say that this kind of self-leaking of sensitive data is surprising. Earlier this month, I was trying to update a license key for an Acronis backup product. During that process, I would log in and the system would repeatedly log me out and then log me back in under someone else's account. Yes, you read that right. It allowed me to access personal details — and, apparently, cloud backups — of other customers, although I opted not to touch that data. A cyberthief stumbling upon such data would have been unlikely to be so kind.
By the way, when I flagged the problem to Acronis, the customer service person denied that it had happened, even though it happened again while I was sharing my screen with them.
Although Saks quickly addressed this issue once it was flagged, the incident illustrates two very different security holes in Saks' operation.
Problem #1: Saks didn't catch the issue itself. Site testing should never stop merely because a site is launched.
One of the magic joys of HTML is that code, seemingly untouched, can develop its own hiccups. Many developers believe in the myth that once code is tested and it works properly, it will forever work just as properly, up until the point that someone changes the code. That certainly seems reasonable and logical, but how many incidents that disprove that theory have to happen before that myth is forever abandoned?
Blame it on HTML gremlins if you like — I typically do — but sites need to be continually tested and probed. That way, there's a good chance that your team will detect — and then fix — any problems before a customer, the media or a lucky cyberthief stumbles on them.
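Such probing doesn't have to be elaborate. As a minimal sketch — assuming a hypothetical URL and just two illustrative patterns, since real PII detection would need far broader coverage — a scheduled check could fetch a public page and fail loudly if anything email- or phone-shaped appears in the HTML:

```python
# Sketch: a recurring post-launch probe for pages that should never
# render customer records. The URL and regexes are illustrative
# assumptions, not Saks' actual endpoints or a complete PII ruleset.
import re
import urllib.request

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def find_exposed_pii(html: str):
    """Return (emails, phone_numbers) found in a page's raw HTML."""
    return EMAIL_RE.findall(html), PHONE_RE.findall(html)

def probe(url: str) -> None:
    """Fetch a page and raise if it appears to leak customer data."""
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    emails, phones = find_exposed_pii(html)
    if emails or phones:
        raise RuntimeError(f"possible PII leak at {url}: "
                           f"{len(emails)} emails, {len(phones)} phone numbers")

# Wire probe() into a scheduler (cron, a CI job) against pages that
# should never show customer records, e.g.:
# probe("https://example.com/waitlist")
```

A check like this, run every few minutes, would have flagged a page full of wait-list email addresses long before a reporter did.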
Problem #2: The leak happened in the first place and happened unencrypted.
Due to the gremlin problem noted above, only so much blame can be assigned to Saks. At least, that's true at this point. If it's later disclosed that the Saks leak was due to human error — or, especially, a disgruntled employee or contractor — then blame may well be appropriate.
The problem isn't entirely that the data leaked. It's the apparent fact (see the screen captures in that BuzzFeed story) that this sensitive data was stored in clear text. Why? Had it been encrypted, it could have leaked and done little or no damage.
Some managers resist bothering with encryption, arguing that the data is only accessible from inside a LAN that has its own robust authentication protections. The problem is that this is true only in theory. When anything goes wrong, as Saks is now discovering, keeping all sensitive data encrypted at all times is the best strategy.
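Encrypting records at rest is cheap to do. Here's a minimal sketch using the third-party `cryptography` package's Fernet recipe — the record fields are made-up stand-ins, not Saks' actual schema, and a real deployment would load the key from a secrets manager rather than generating it inline:

```python
# Sketch: encrypt a wait-list record before storing it, so an
# accidentally exposed page or file yields only ciphertext.
# Assumes: pip install cryptography. Fields are illustrative.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # production: fetch from a secrets manager, never hardcode
fernet = Fernet(key)

record = {"email": "shopper@example.com", "phone": "212-555-0123", "sku": "ABC123"}

token = fernet.encrypt(json.dumps(record).encode())  # ciphertext, safe to store
assert b"shopper@example.com" not in token           # plaintext is not visible

restored = json.loads(fernet.decrypt(token).decode())
assert restored == record
```

Had the wait-list records been stored this way, the exposed pages would have shown opaque tokens instead of customers' email addresses and phone numbers.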
This article is published as part of the IDG Contributor Network. Want to Join?