If watching your in-laws bicker awkwardly over Thanksgiving weekend wasn't enough for you, this Docker vs. Rocket feud feels like a full-blown bout in the Octagon.
We've seen a landslide of vulnerabilities announced in the last few months, from Shellshock to POODLE, and it looks like that trend will only continue. The discovery of a critical vulnerability in Windows SChannel, and the even worse problems introduced with a hasty patch, has added a heap of unplanned work for Windows IT pros.
GuardRail provides an easy way to validate that the update has been applied successfully and the offending registry keys deleted. In addition to validating that patches have been applied now, our SChannel check can be run automatically to protect against regressions.
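As a rough illustration of what such a check involves, here is a minimal Python sketch that looks for the cipher suites the original patch added inside the SChannel default cipher-suite registry value. The registry path and the suite list are assumptions to verify against Microsoft's advisory; this is not GuardRail's actual implementation.

```python
import sys

# Cipher suites added by the original patch -- an assumption here;
# confirm the exact list against Microsoft's KB article.
SUSPECT_SUITES = [
    "TLS_DHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_DHE_RSA_WITH_AES_128_GCM_SHA256",
]

# Assumed location of the default cipher-suite order; verify on your systems.
SSL_KEY = (r"SYSTEM\CurrentControlSet\Control\Cryptography"
           r"\Configuration\Local\SSL\00010002")

def check_schannel(suspect=SUSPECT_SUITES):
    """Return any suspect suites still present, or None off Windows."""
    if sys.platform != "win32":
        return None  # registry checks only make sense on Windows
    import winreg  # Windows-only stdlib module
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SSL_KEY) as key:
        functions, _ = winreg.QueryValueEx(key, "Functions")
    # "Functions" may be REG_MULTI_SZ (a list) or a comma-separated string
    if isinstance(functions, str):
        functions = functions.split(",")
    return [s for s in suspect if s in functions]
```

An empty list means the suspect suites are gone; a non-empty list means the rollback hasn't fully landed on that machine.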
ScriptRock attended the DevOps Enterprise Summit recently, and we had a blast. We talked to people non-stop for three days, gave countless GuardRail demonstrations, caught a few talks, made some new friends, and learned a lot from attendees about the kinds of challenges they face implementing DevOps. (And hey, did you guys try those breakfast burritos they had on day 2? Delicious.)
Google recently announced a vulnerability, dubbed POODLE, that targets SSLv3 connections. SSLv3 is an older encryption protocol in the SSL/TLS family; most modern browsers default to newer TLS versions instead, e.g., TLSv1.2.
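If you're making TLS connections from Python, one way to rule SSLv3 out entirely is to pin a minimum protocol version on the context. A minimal sketch (requires Python 3.7+ and OpenSSL 1.1.0g+; recent Python versions already disable SSLv3 by default in `create_default_context`):

```python
import ssl

# Build a client context that refuses anything below TLS 1.2.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# SSLv3 now falls outside the allowed range, so a POODLE-style
# downgrade to SSLv3 cannot succeed against this context.
print(context.minimum_version)
```

Servers can apply the same `minimum_version` setting to the context they wrap their listening socket with.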
News about the major bash vulnerability dubbed Shellshock is reaching far and wide at the moment, and for good reason: its effects could reach even further than those of its distant cousin, Heartbleed. IT departments have been scrambling not only to patch machines, but even to find affected machines on their own networks. As configuration monitoring becomes commonplace, however, today's headache will probably be remembered as something that could have been a simple nuisance.
While both OpenSSL (responsible for Heartbleed) and the bash shell (where Shellshock gets its name) are found in datacenters and businesses in every corner of the world, that's where the similarities end. The mechanisms exploiting the two vulnerabilities are entirely different, despite the tech media's continued comparisons.
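The bash side is easy to probe directly. Here's a sketch in Python of the widely circulated Shellshock check: it plants a crafted function definition in the environment and sees whether bash executes the command trailing the definition (a vulnerable bash does; a patched one doesn't). Treat it as an illustration of the mechanism, not a complete audit.

```python
import shutil
import subprocess

def shellshock_status():
    """Run the classic Shellshock probe against the local bash, if any."""
    bash = shutil.which("bash")
    if bash is None:
        return "no-bash"
    # A vulnerable bash keeps parsing past the function body "() { :;}"
    # and executes "echo vulnerable" while importing the environment.
    probe_env = {"x": "() { :;}; echo vulnerable"}
    out = subprocess.run(
        [bash, "-c", "echo probe-done"],
        env=probe_env, capture_output=True, text=True,
    ).stdout
    return "vulnerable" if "vulnerable" in out else "patched"

print(shellshock_status())
```

This only tests the bash binary on the machine where it runs; finding every exposed CGI script, DHCP client, or SSH forced command that feeds attacker-controlled environment variables into bash is the harder inventory problem the post describes.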
Some people, we won't say who, have taken to poking fun at the idea of thought leadership in DevOps. We'd like to set the record straight: here at ScriptRock, the only problem we have with thought leaders is that there aren't enough. Since we believe in continuous improvement, we've taken the first step to addressing this issue. With our elegant "DevOps Thought Leader" shirt, anyone can be part of the DevOps intellectual elite.
When you want to win, you don't attack where your opponent is strongest; you hit them where they're weakest. Quarterbacks throw to the receiver covered by an injured corner, bike thieves look for the bike with the weakest chain, and lions drag down the wildebeest at the back of the pack. The larger the surface area, the more likely there is to be variation in the strength of defense, and the larger the difference between the strongest and weakest points.
In theory, DevOps is good for every business. But if there's one thing I've learned from talking to people in the DevOps community, it's that theory doesn't always translate perfectly to reality. Theory is an advertisement; reality is a data set. That's why ScriptRock partnered with Microsoft to sponsor a DevOps study from Saugatuck Technology.
There’s no right place to start with DevOps, but there are reasons that different people choose to start where they do. There are also ways of communicating that make DevOps more likely to succeed in your organization. Being aware of the people you are talking to and the processes they work within can make your DevOps experiments more likely to grow into a business-wide culture.
Imagine this — you're rolling out a new version of your web app. Works great in the dev environment, and it's been signed off on in staging, so it gets rolled out to production. Things seem fine, so you call it a night.
Then the support requests begin flooding in. Something's broken somewhere, and it's not immediately obvious how. The performance monitor shows the machines are running well, so it can't be that. Ah well, better crack one of those neon-colored energy drinks: it's time to roll back and log into these machines to look through logs and config files for a potential cause. "How could this be happening?" you ask. "I mean... these machines are all configured the same, right?"