Best solution I’ve got for when Docker fills up your hard drive. I think I named this cronjob correctly.
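The cronjob itself isn’t shown in this excerpt; as a sketch only, here’s what such a cleanup job might look like on a modern Docker install with `docker system prune` available (the filename, schedule, and flags are my own guesses, not the original’s):

```shell
# /etc/cron.d/docker-janitor  (hypothetical name)
# Every Sunday at 03:00, remove stopped containers, unused networks,
# dangling images, and build cache. Add --volumes only if you're sure
# no data lives in unused volumes.
0 3 * * 0  root  /usr/bin/docker system prune -af >> /var/log/docker-prune.log 2>&1
```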
There’s pretty broad agreement that HTTPS is the way forward for the web. In recent months, there have been statements from IETF, IAB (even the other IAB), W3C, and the US Government calling for universal use of encryption by Internet applications, which in the case of the web means HTTPS.
I’m on board with this development 100%. I say this as a web developer who has faced, and will continue to face, some uphill battles bringing everything into HTTPS land. It won’t happen immediately, but the long-term plan is 100% HTTPS. It’s not the easiest move for the internet, but it’s undoubtedly the right one.
The lack of encryption on the internet is not too different from the weaknesses in email and SMTP that make spam so prolific. Once upon a time the internet was mainly a tool of academics; trust was implicit and ethics were paramount. Nobody thought security was of major importance. Everything was done in plain text for performance and easy debugging. That’s why you can use telnet to debug most older popular protocols.
In 2015 the landscape has changed. Academic use of the internet is a small fraction of its traffic. Malicious traffic is a growing concern. Free sharing of information, the norm in the academic world, is the exception in some of the places the internet reaches.
Users deserve to be protected as much as technology will allow. Some folks claim “non-sensitive” data exists. I disagree: sensitivity is subjective, a matter of personal perspective. What’s sensitive to someone in one situation is not sensitive to others. Topics that are normal and safe to discuss in most of the world are not safe in others. Certain search queries are more sensitive than others (medical questions, sensitive business research). A web developer doesn’t have a good grasp of what is or isn’t sensitive; it’s specific to the individual user. It’s not every network admin’s right to know whether someone on their network browsed for or purchased pregnancy tests, or bought a book on parenting children with disabilities on Amazon. The former may not go over well at a “free” conservative school in the United States, for example. More than just credit card information counts as “sensitive data” here. Nobody should be so arrogant as to think they understand how every person on earth might come across their website.
Google and Yahoo took the first step by moving search to HTTPS (Bing, oddly enough, still seems to be using HTTP). This is the obvious second step to protecting the world’s internet users.
Unfortunately, as a web developer you can no longer be certain a user sees your website as you intended. Sorry, but it doesn’t work that way. For years ISPs have been testing the ability to do things like insert ads into webpages. As far as I’m aware, in the U.S. there’s nothing explicitly prohibiting replacing ads. Even net neutrality rules seem limited to degrading or discriminating against certain traffic, not modifying payloads.
I’m convinced the next iteration of the great firewall will not explicitly block content, but silently censor it. That will be harder to detect than being denied access to a website outright. The ability to do large-scale processing like this is becoming more practical: just remove the offending block of text or image. Citizens of oppressed nations may well not notice a thing.
There have also been attempts to “optimize” images and video. Again, even net neutrality is not entirely clear here, assuming the practice isn’t targeted at competitors, for example.
True, but let’s be honest: it’s 8,675,309 times better than using nothing. CAs are a vulnerability, a bottleneck, and a potential target for governments looking to control information. But browsers and OSes let you manage certificates; the ability to stop trusting a CA exists. Technology will improve over time. I don’t expect us to still be using TLS 1.1 and 1.2 in 2025. Hopefully substantial improvements get made along the way. This argument is akin to not buying a computer because there will be a faster one next year. It’s the best option today, and we can replace it with better methods when they become available.
First of all, domain validation certificates can be found for as little as $10. Secondly, I fully expect these prices to drop as demand increases. Domain verification certificates have virtually no cost as it’s all automated. The cheaper options will experience substantial growth as demand grows. There’s no limit in “supply” except computing power to generate them. A pricing war is inevitable. It would happen even faster if someone like Google bought a large CA and dropped prices to rock bottom. Certificates will get way cheaper before it’s essential. $10 is the early adopter fee.
True, not everyone supports it yet. That will change. It’s also true that some (like CDNs) are still charging insane prices for HTTPS. It’s not practical for everyone to switch today, or even this year. But that too will change as demand increases. Encryption overhead is nominal. Once again, pricing wars will happen once someone wants more than just their shopping cart served over SSL. The problem today is that demand is minimal, but those who need it must have it; therefore price gouging is the norm.
Yes, seriously. HTTPS is the right direction for the Internet. There are valid arguments for not switching your site over today, but those roadblocks will disappear, and you should re-evaluate where you stand periodically. I’ve moved a few sites, including this blog (SPDY for now, HTTP/2 soon), to experience what would happen. It was largely a smooth transition. I’ve still got some sites on HTTP. Some will stay there for the foreseeable future due to other circumstances; others will switch sooner. This doesn’t mean HTTP is dead tomorrow, or next year. It just means the future of the internet is HTTPS, and you should be part of it.
Decided to replace the aging MySQL 5.1.x on a CentOS box with a newer Percona Server 5.6. The first step was to update MySQL 5.1 to 5.5. This went relatively smoothly after I figured out some MySQL transaction kung-fu and ran
mysql_upgrade. Step two was to replace it with Percona Server. It installed fine. Almost too simple. So naturally I ran:
which resulted in a dreaded:
Starting MySQL (Percona Server).... ERROR! The server quit without updating PID file (/var/lib/mysql/SERVERNAME.pid)
After a few minutes of poring through the logs I noticed this little nugget:
2015-04-25 19:18:16 18234 [ERROR] /usr/sbin/mysqld: unknown variable 'table_cache=7K'
Apparently around MySQL 5.1.3 they replaced table_cache with
table_open_cache. A simple rename in my.cnf, and we’re on our way. Now running a little faster thanks to some much newer DB binaries.
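For reference, the fix is a one-line rename in the server’s config (the `7K` value comes from the error message above; your cache size will likely differ):

```ini
[mysqld]
# Old variable name, removed in later MySQL versions:
# table_cache = 7K
table_open_cache = 7K
```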
At least in Mac OS X 10.8+ with recent versions of Wireshark (1.10.8 tested), it’s simply a matter of running the following command:
sudo sh /Library/Application\ Support/Wireshark/ChmodBPF/ChmodBPF
Not sure why Wireshark can’t just prompt and do it automatically, but this seems to fix the problem.
PCWorld has a pretty interesting story on Microsoft’s R&D efforts. While Microsoft is viewed as an old technology company, it isn’t done innovating. In many ways it reminds me of AT&T in the Bell Labs days. It’s very possible some of the best research of the day is being done there, and we quite possibly won’t realize it for years to come, and then only in some derived way.
The research and innovation methods of companies are always interesting. Big companies that invest big bucks with little guarantee of a payoff are the most interesting. We rarely hear or see much about them, though.
Windows 8 has launched, and the launch has been quiet. Not many seem to even care. Every media outlet has some coverage, but it’s hardly the buzz Apple got for their latest upgrade, and certainly not the buzz iOS 6 got. Sign of the times.
I’ve got two computers currently running Windows 7. I think at least one will be upgraded to Windows 8 in the next week or two; I just haven’t decided which will go first. In my experimentation it’s not a bad OS. The UI takes some getting used to, but otherwise it’s really not bad. Do I “like” it? Not terribly much, but I didn’t “like” Windows 7 either.
Upgrading before January 31 is discounted to $40, which is worth taking advantage of if you can.
Kaspersky Lab, of antivirus fame, is apparently developing its own operating system:
We’re developing a secure operating system for protecting key information systems (industrial control systems (ICS)) used in industry/infrastructure. Quite a few rumors about this project have appeared already on the Internet, so I guess it’s time to lift the curtain (a little) on our secret project and let you know (a bit) about what’s really going on.
Sounds like a competitor for VxWorks and other embedded systems. More competition is good, since it will push other OSes to strengthen their security to compete. Other than OpenBSD, there’s really nobody on the market that markets itself primarily as being secure.
In an interesting move, Microsoft announced it will sell Windows 8 Pro upgrades for $39.99, at least initially. The Windows 7 Pro upgrade is about $150, so this is a huge price cut. Also noteworthy: you can upgrade to Pro from one of the more basic editions of Windows 7. Microsoft also reduced the number of editions to four.
Given Apple has been charging under $30 for upgrades, it was only a matter of time. In Apple’s case, however, the software is an accessory to the hardware; Microsoft doesn’t sell the hardware its OS runs on. My bet is they’re hoping the OS will be the platform through which users engage with Microsoft services.
This is somewhat ironic given Windows 7 was a modest upgrade (technology wise) from Vista. Windows 8 is a complete rethinking and a much bigger investment. The pricing is inverted.
This is Microsoft’s big move to avoid being marginalized by the internet into the “expensive software that you don’t really need to run a web browser” category. Microsoft just kept itself relevant. The question, however, is whether they can make a business out of this strategy.
On how Google deals with leap seconds:
The solution we came up with came to be known as the “leap smear.” We modified our internal NTP servers to gradually add a couple of milliseconds to every update, varying over a time window before the moment when the leap second actually happens. This meant that when it became time to add an extra second at midnight, our clocks had already taken this into account, by skewing the time over the course of the day. All of our servers were then able to continue as normal with the new year, blissfully unaware that a leap second had just occurred.
Good idea. The second itself is meaningless. Spreading it out is much better and easier than accommodating it in the rest of your stack.
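The quote above describes skewing clocks gradually so the full extra second has been absorbed by the time the leap second occurs. As a minimal sketch of that idea (assuming a simple linear smear; Google’s actual smear function and window may differ):

```python
def smear_offset(now, leap_epoch, window=86400.0):
    """Fraction of the leap second already absorbed at time `now`.

    All times are in seconds. The smear runs linearly over `window`
    seconds and completes exactly at `leap_epoch`, so instead of a
    one-second step at midnight, clocks drift by a few milliseconds
    per update across the whole window.
    """
    start = leap_epoch - window
    if now <= start:
        return 0.0   # smear hasn't started yet
    if now >= leap_epoch:
        return 1.0   # full extra second absorbed
    return (now - start) / window
```

At the midpoint of the window half the second has been applied; by the leap moment itself the clocks are already in agreement with post-leap time, so no process ever observes a discontinuity.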
In both cases corporate culture ironically killed everything they were trying to acquire. In both cases, they could have been huge, had they been agile enough to stay on the bleeding edge.