February 19th, 2019
If all goes to plan, this will be the question that everyone is asking this week. The problem with the plan is that no one has heard of GYRT before, and no one knows what swapkown means!
As usual, it’s all part of my master plan to force you to seek help when you need it. What most people do when they run into a problem is give up. But in order to learn how to code, you need to have a different mindset. You have to see instructions to swapkown your keyboard with GYRT, and you have to simultaneously realize that you have no idea what that means or how to do it, but also believe that you have the power to figure it out.
In other words, regular people see an instruction to perform a swapkown on your GYRT and they think “I have no idea how to do that.” Coders see that same instruction, and they think “I have no idea how to do that… YET!” And then they figure it out!
So how can you figure it out? You can ask your friends on Slack. You can ask the massive distributed artificial intelligence in your pocket. You can post a question on Stack Overflow. Any of those will hopefully get you an answer pretty quickly.
So after that preface, what’s the actual answer? The answer is here: swapkown.
permalink
August 16th, 2018
Here’s my idea. You take the brain of a Tesla Model 3 and all of the associated autopilot sensors and you install it in one or more intersections. Maybe you add some extra sensors to other locations so it has expanded awareness about traffic which is approaching the intersection.
Then you can do a few cool things.
First, you could have the intersection intelligently direct traffic for optimum flow and minimum wasted energy. It would be aware of not only nearby vehicles, but pedestrians and random obstacles–anything that an autonomous car could be aware of. Think about how many times you’ve stopped at a red light when you didn’t need to, or when, if the light had stayed green for just two seconds longer, you wouldn’t have had to stop and no other cars would have been seriously inconvenienced. Over the course of a year, a single intersection like this could save thousands of person-hours and many tons of CO2.
You could also have the intersection broadcast the info it has gathered over a V2V network, giving all nearby autonomous cars additional information that would make them safer. The autonomous intersection would have better info than an autonomous car because it has the additional advantage of controlling a finite area with relatively fixed context. That would enable it to be more intelligent about things like large non-moving obstacles that suddenly appear in a roadway. Autonomous cars have a hard time knowing whether a big non-moving rectangle is a dangerous road hazard or an innocuous sign.
The intersection could also route traffic intelligently to let emergency vehicles through with minimal disruption of surrounding traffic. It could likewise prioritize mass transit vehicles like buses, which would cut the time it takes to get places on a bus and make transit a more attractive and feasible option for commuters.
The more of these autonomous intersections you add, the more intelligent and efficient they become. Two adjacent intersections could share knowledge to expand their reach in smoothing out traffic flow. A whole city could optimize traffic on a massive scale. A city-wide grid of these would also be able to provide traffic information to cars and route-planning software, allowing them to predict with high accuracy when and where traffic will form and to route around it.
A city like this would also be able to solve some of the expense issues with lidar on autonomous cars. Lidar scanners are super expensive, which is why Tesla has gone the route of creating an autonomous system that doesn’t use them. But while visual-only autonomy can absolutely be superhuman, lidar provides unquestionably more precise and accurate information about moving objects in 3D space, especially in dense urban areas full of moving vehicles, pedestrians, and obstacles, and particularly at intersections. So instead of installing one lidar on every single vehicle in a city, you could install one lidar at each intersection, which would make all autonomous cars in the city safer while saving the expense.
permalink
June 29th, 2018
We should be critical of others and three times more critical of ourselves, because it’s ten times harder to see your own flaws than it is to see the flaws in someone else.
permalink
February 3rd, 2015
So today I discovered that the HTML videos that I made for Educator.com a few years back are being pirated. You can find various torrents all over the internet!
People are willing to break the law to hear me teach! I don’t know why I trust the words of a pirate, but the description of the first one I saw came across to me as high praise.
A nice training session for those who want to learn the dark arts of Proper website building without all the WYSIWYG crapware that’s flying about on the internet….Even the more experienced users will find this interesting….
I made these videos for Educator a long time ago. I figured they’d be mostly forgotten about by now, but I still get ‘thank you’ emails from people who have taken the course and somehow figured out my email address (with a little research it isn’t hard), so clearly people are still watching them. Now that I think of it, I also still get a paycheck from Educator every month, which indicates that at least some people are watching the non-pirated versions on Educator.com.
This isn’t the first flattering but slightly odd realization I’ve had regarding these videos. Have you seen my twitter account? Yep, that’s my name, and that’s a photo of me, but that’s not my account! It’s an impersonator! And it has more followers than my actual twitter account! What does that say about me? (Probably just that I don’t really use twitter.)
The experience that takes the cake, however, was my first celebrity sighting—where I myself was the celebrity! I was working the JPL Open House at the robotics tent when someone came up and said “I took your webmaking course!” I was so surprised I didn’t even know what to say or do. I think celebrities are supposed to want to avoid attention, right? I kind of felt like I was obligated to sneer at her and put on dark sunglasses, but I was so flattered that I wanted to hug her. I guess the right thing to do would have been to get a picture with her. Next time? Will there be a next time?
permalink
December 18th, 2014
In the last four websites I’ve built, I’ve used four different jQuery carousel plugins. I’m not really a super fan of carousels; I think they’re kitschy and make for bad UX. They’re the twenty-tens’ version of a Flash intro screen. But designers throw them in and clients like them, and I don’t always get to make all the decisions around here.
Since I’m pretty much an expert at it by now, let me tell you about the process of finding a jQuery carousel that you hate. First, you hit up Google and hold the command key down while clicking on links until your tabs look like the Nintendo version of the Sierra Nevada mountains across the top of your browser.
Then you sift through all the articles like “500 Great Responsive Carousels” and “25 Carousels that are the Last Carousel You’ll Ever Need” until you have a list of twenty or so GitHub pages. Then you start looking through the demos, most of which won’t work. Then you download the code and see if you can get it working. Most of it you won’t be able to, so you spend some time looking at the documentation and making adjustments and fixes until you get some of them to work. Then you exclude the ones that make the browser run slowly because they’re so bloated. Then you spend some time figuring out the APIs that the plugin vendors wrote to make it “easy” for you to change up the features. When those prove too much effort, you just rewrite the code yourself, slashing out wide swaths of code you probably don’t need until you’ve got a somewhat lean plugin. Then you spend forever working around the skins they built in, since your carousel needs to look different. In the end, you’ll end up with one plugin that kind of works, but is buggy, bloated, or otherwise inadequate.
I finally got tired of this process, which is why I have contributed my own addition to the jQuery carousel mess. I call it Gallop, and it’s intentionally very simple, barebones, and featureless. We’re web developers. We know how to tweak some jQuery to get the features that we want working. We can adjust CSS to make things look right. We don’t need massive plugins with complicated mechanisms to control a million possible configurations. Maybe we just want some divs to slide across the screen, ok? Is that so hard?
No, it turns out it’s not so hard. Less than 100 lines of jQuery, a little more CSS. You just put an unordered list in a div, and it works. The code is incredibly easy to follow, and if you want to change around the features, you just directly edit the code! If you want an additional feature, you just add it right in!
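To make that concrete, here is a minimal sketch of the kind of carousel Gallop implements. This is not Gallop’s actual source; the markup, timing, and CSS assumptions (a clipped container holding a wide ul) are just for illustration:

// Sketch only: slide the <li> children of a <ul> across a clipped container.
// Assumes CSS gives the container overflow: hidden, floats the li elements,
// and makes the ul wide enough to hold all the slides in a row.
$(function () {
  var $list = $('#carousel ul');  // hypothetical markup: <div id="carousel"><ul><li>...</li></ul></div>
  var count = $list.children('li').length;
  var index = 0;
  setInterval(function () {
    index = (index + 1) % count;  // advance, wrapping back to the first slide
    var offset = -index * $list.children().first().outerWidth();
    $list.animate({ marginLeft: offset }, 400);  // slide one frame to the left
  }, 4000);
});

A dozen lines like these are the whole mechanism; everything else (skins, pagination dots, touch support) is exactly the kind of configurable bulk this approach leaves out.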
Anyway, here’s the GitHub page. I even made a fun little demo for it as well. Hopefully this simplifies the jQuery plugin process for some people.
permalink
November 10th, 2014
Firefox just came out with a developer edition of their browser.
You can install it in parallel with your existing Firefox installation. It has a ton of cool features for developers built right in, and it’s kept up to date on the “aurora” channel, so you can test with all the newest features currently in development months before they come out on the standard release channel.
More info here and here, and a video here.
permalink
October 25th, 2013
Google has announced that they will be forking WebKit to create a new rendering engine for Chrome called Blink.[1][2] Google will be able to remove millions of lines of code from Blink that aren’t needed for Chrome, and will be able to work more aggressively to develop Chrome-specific features in Blink.
This won’t unsettle things too much—at least not right away. The latest builds of Canary are already using Blink, and there is no perceptible difference. The CSS -webkit- prefixes even still work. So how long will it be until we have to switch over to using a -blink- or -chrome- prefix for our rounded corners? The answer is that it’s the wrong question: there will be no -chrome- prefix, or any other vendor prefix at all, moving forward. Instead, you’ll have to change a setting in your browser to enable experimental CSS features.
Vendor prefixes seemed like a good idea. And maybe they would have worked, if developers had used them as intended. Eric Meyer wrote the seminal article advocating vendor prefixes, Prefix or Posthack. He explained that we could use vendor prefixes to escape the conflicting standards of the bad old days, when the same code was rendered differently by different browsers. Due to Internet Explorer’s broken box model, width meant one thing in Trident (IE’s layout engine) and something entirely different in Gecko (Firefox’s layout engine). Furthermore, because these differences were alive and in use all over the internet, there was no way for either browser to fix (or compromise) its definition to unify the web without breaking existing pages. Internet Explorer eventually fixed its box model, but it was an arduous journey, one that has not yet reached its destination despite years of very clever attempts to patch things over: quirks mode, doctype switching, and, for IE, conditional comments and the problematic compatibility view (which still persists in IE10!).
With vendor prefixes, none of that would happen, because new CSS features would go through a vetting period, during which the feature could only be accessed by prefixing a vendor code to the property name. After a period of testing, when the standard was defined more definitively, the browser could begin supporting the property without the prefix, and nobody would have incompatible CSS anymore!
This quickly created a situation where the simple task of rounding some corners turned into this mess:
-ms-border-radius: 10px;
-moz-border-radius: 10px;
-webkit-border-radius: 10px;
-khtml-border-radius: 10px;
border-radius: 10px;
When it comes to background gradients, things are even worse:
background: -moz-linear-gradient(top, rgba(255,255,255,1) 0%, rgba(255,255,255,0) 100%); /* FF3.6+ */
background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,rgba(255,255,255,1)), color-stop(100%,rgba(255,255,255,0))); /* Chrome,Safari4+ */
background: -webkit-linear-gradient(top, rgba(255,255,255,1) 0%,rgba(255,255,255,0) 100%); /* Chrome10+,Safari5.1+ */
background: -o-linear-gradient(top, rgba(255,255,255,1) 0%,rgba(255,255,255,0) 100%); /* Opera 11.10+ */
background: -ms-linear-gradient(top, rgba(255,255,255,1) 0%,rgba(255,255,255,0) 100%); /* IE10+ */
background: linear-gradient(to bottom, rgba(255,255,255,1) 0%,rgba(255,255,255,0) 100%); /* W3C */
filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#ffffff', endColorstr='#00ffffff',GradientType=0 ); /* IE6-9 */
Not only does each browser get its own line, the WebKit browsers actually use two lines, due to a change in syntax. The messiness of this is annoying, but it isn’t actually that huge of a problem in terms of web compatibility. In fact, it saved us from having to write crazy javascript to detect the browser and version number in order to give one syntax to older WebKit browsers and another syntax to all the rest. The real problem is that this is how most developers began coding their sites:
-webkit-border-radius: 10px;
border-radius: 10px;
or even just:
-webkit-border-radius: 10px;
You can see how this sort of abuse caused problems. Developers got used to Chrome being on the bleeding edge of CSS3 features, and began coding specifically for it without taking the time to check whether other browsers had supported the features yet. Effectively, vendor prefixes had become browser detection all over again, and developers were using this browser detection to block functionality from all browsers except WebKit. As you can imagine, this annoyed other browser vendors, like Opera, who were working hard to keep their browser up to date with CSS3 but were blocked from applying the styles by lazy coding. Developers had turned -webkit- into the very -beta- prefix that Eric Meyer dismissed in his article.
The -beta- prefix was proposed by Peter-Paul Koch as a compromise to his original proposition to abolish vendor prefixes. Koch’s vision was prophetic. In March 2010 he wrote:
Eventually Opera will discover that plenty of sites use -webkit-transition, but not -o-transition. Will Opera start to support -webkit-transition, too?
Despite a widely circulated call to action urging web developers to change their ways and prevent “judgement day,” two years after Koch’s prediction Opera announced it would support -webkit- prefixes. Mozilla had been planning to do the same. Microsoft’s massive attempt to change the bad behavior of developers had failed, and they too would follow suit.
It looked like things were going to get really messy. Then suddenly, Opera announced that it was abandoning its rendering engine Presto, and would instead use WebKit. While many saw this as a sad day for the web, the silver lining was that judgement day had been pushed back a little. Furthermore, Opera is adopting Blink with Google, and Mozilla is creating yet another rendering engine called Servo. So we aren’t looking at a rendering engine monoculture or a return to the “bad old days.”
So with all these new rendering engines, what will happen to vendor prefixes? They’re all going to go and live with the <blink> tag and IE’s broken box model. Mozilla is heading in the same direction as Chrome: “avoiding vendor prefixes by either turning things off before shipping or shipping them un-prefixed if they’re stable enough.” This is where all the vendors are headed, and the W3C working group on CSS has already put together a policy that rings the death knell for vendor prefixes.
This article originally appeared on the shoutleaf.com blog. It was cross-posted by permission of the owner of Shoutleaf (me) and the author of the article (me).
permalink
April 28th, 2012
I think of a Chef as a sensitive guy
Cutting onions makes him happy, but it also makes him cry.
permalink
January 27th, 2012
If you’re in the web developer world, you’ve noticed by now that there isn’t a space before the ‘5’ in HTML5. This is different from HTML 4.01, HTML 4, HTML 3.2, and HTML 2.0. Why this new direction? Actually, it’s more in line with the old direction than most realize. The first version of HTML wasn’t called “HTML 1.0”; in fact, there is no such thing as “HTML 1.” It all started out with a document called “HTML Tags,” which is, for all intents and purposes, the first version of HTML. Things were a little muddy there for a while as updates came in rapidly, but the next real version was called HTML+. Sure, you can go back and find the incremental numbered versions of HTML in some official specs, but in terms of reality and use, there was HTML Tags, then HTML+. These were real innovations and steps up in the language, but after that there was a drive to clean things up a bit, and the numbered versions dominated. I’m not saying that HTML 4 wasn’t groundbreaking and painfully needed (and it took way too long for developers and browsers to implement), but it was truly an incremental update, fixing and adding things that were obvious next steps.
Things changed with HTML5, which starts out with an awareness of the internet today. A search for “HTML 5” will find any page that mentions HTML and has some numbers on it–not very useful. But removing the space allows search engines to key in on just the term we need. This is just the first indication of what characterizes HTML5: an awareness of the world as it is today, and where it is going. We learned from HTML 4 that the internet is run by people, lots of people, and an official specification or update doesn’t do anything unless all those people–browser vendors, web developers, users–get on board. What if Tim Berners-Lee had submitted his “HTML Tags” document to the IEEE or some official organization, got it approved, registered, published, and then just waited for the world to adopt it? We wouldn’t have the internet we have today.
Tim Berners-Lee made something that worked, and he publicized it–he created documents to help people use it, he facilitated its growth. The same goes for the market-minded naming of HTML+. It’s true that HTML was in deep need of regulation, but regulation was not enough to magically make HTML 3.2 or HTML 4.01 become a universal standard. That’s why HTML5 doesn’t have a space. And why it has its own logo. Have you noticed that people don’t really talk about “Web 2.0” anymore? HTML5 has completely encompassed both that term and that concept–HTML5 has come to mean not just the next version of HTML, but the next version of the web. It includes the newest versions of other languages, like CSS 3, dozens of little technologies like offline storage and location detection, and rapid-release browser schedules like Firefox and Chrome have. It stands for everything the internet has been waiting for.
Here’s something that I think is telling. The doctype for HTML 4 looked like this:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
Besides being messy, notice the prominent version number: 4.01. But here’s the HTML5 doctype:
<!DOCTYPE html>
No version number! HTML5 is the end of the idea that we can just release a new version of HTML and wait decades for people to update. From now on, the expectation is that you’re up to date–you’re using the latest browsers with the latest standards and the latest technologies. If you’re not, we’ll still display your content, but the internet is done pandering to the lowest common denominator. HTML5 is not the next version of HTML, but rather it’s a vision for the future of the internet, and it has a lot more in common with HTML Tags than with HTML 4.01 Strict.
permalink
August 18th, 2011
People are beginning to hear about this idea of using words and spaces to make strong passwords instead of crazy characters. Cases in point: “fluffy is puffy” is more secure than “J4fS<2”, and “correct horse battery staple” is more secure than “Tr0ub4dor&3”, while the more secure passwords in both cases are easier to remember.
When people see stuff like this, they seem to make a few mistakes because they don’t really understand the principles behind it. The word method is useless if your password ends up being short (less than 12 chars), or if you use a common phrase (“happy go lucky”), or if you draw from a limited set of words (“five three two nine six four eight ten three two”). Using only common words is bad too.
Here is one way to think about why: the security of your password can be measured by the number of possibilities. Traditionally, this has been measured character by character. So in a lowercase-letters-only password, there are 26 possibilities per slot. A six-slot password has 308,915,776 possibilities (which is not very secure).
“hsufbe” = 6 chars, 6 slots
(26)^(6) = 308,915,776
The problem is that this is only true if password guessers work like this: aaaaaa, aaaaab, aaaaac, aaaaad… and so on. But if your password is “happy!” then it’s going to get guessed by a dictionary attack much, much sooner.
Therefore, we need to make a more general rule:
(values in category) ^ (slots) = possibilities
If you’re working with characters on the keyboard (e.g., these: abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ 0123456789 `~!@#$%^&* ()_+-={}[]\|;':",./<>?) then you have 95 values in the category. A six-character password then has 735,091,890,625 combinations.
“h&’8,}” = 6 chars, 6 slots
95 ^ 6 = 735,091,890,625
While this is better, it’s still not fantastic. But here’s where English words really come in handy. There are between 300,000 and a million English words, depending on your dictionary and how you define words. Let’s use the lower bound and assume 300,000 possible values per slot (the slots are the WORDS now, not the characters).
“fluffy is puffy” = 15 chars, 3 slots
300,000 ^ 3 = 27,000,000,000,000,000
“correct horse battery staple” = 28 chars, 4 slots
300,000 ^ 4 = 8,100,000,000,000,000,000,000
The important thing to notice here is that we’re not calculating by characters anymore–a brute force cracker would have an impossibly hard time. But a dictionary attack cracker is going to have the best shot, so that’s what we’re looking at.
Even though you’re using words with 5 and 6 characters, you don’t get to count each character as a slot: they get chunked into one slot. Similarly, if you use a phrase, even though you’re using multiple words, you don’t get to count each word anymore: they get chunked into a single slot of phrases. I have no idea how many common phrases there are, but I’m sure there are password programs that take sentence fragments from the internet and try them as passwords. What is the probability that such a program will hit upon the phrase “jimmy crack corn?” Hard to say. If it’s drawing text from a transcript of Pinky and the Brain, then your odds might be pretty bad. But the big point is that your slots have now been reduced to 1. Let’s assume that the cracker is drawing from a trillion phrases.
“jimmy crack corn and i dont care” = 32 chars, 1 slot
(1,000,000,000,000) ^ (1) = 1,000,000,000,000
Not very good. Barely better than a 6 char password, even though it’s 32 characters and 7 words long.
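If you want to play with these numbers yourself, the rule from above is one line of code. Here it is as a bit of javascript (the function name is mine, and the really big results come back as approximate floating-point values, since they’re too large for exact integers):

// The general rule: (values in category) ^ (slots) = possibilities
function possibilities(valuesInCategory, slots) {
  return Math.pow(valuesInCategory, slots); // huge results are approximate floats
}

possibilities(26, 6);      // "hsufbe": 308,915,776
possibilities(95, 6);      // "h&'8,}": 735,091,890,625
possibilities(300000, 3);  // "fluffy is puffy": 2.7e16
possibilities(300000, 4);  // "correct horse battery staple": 8.1e21
possibilities(1e12, 1);    // a phrase the cracker already knows: 1e12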
So: things to keep in mind. Draw your “chunks” or slots from categories with a very large number of values. The more the better: drawing common English words is ok if you use a lot of them. Drawing from a larger range of English words (e.g. scientific words, place names, proper nouns, stuff that would get you disqualified from Scrabble) means you can get away with using fewer slots. If you also use other languages, you’re even better off.
But also remember that you’re always limited by the less sophisticated password-cracking algorithms too. The following are all extremely uncommon words drawn from various languages and technical vocabularies: xi af ju. Let’s assume that for some strange reason you’re familiar with these words, so they’re easy for you to remember. You might think:
“xi af ju” = 3 slots
(~6,000,000) ^ (3) = ~216,000,000,000,000,000,000 (an absurdly large number)
but in fact
“xi af ju” = 8 chars, 8 slots (26 letters plus the space character = 27 values per slot)
(27) ^ (8) = (282,429,536,481)
So you’ll beat the sophisticated dictionary attack but lose at a persistent brute force attack. Likewise, you may have several words that are all part of some similar category (e.g. numbers, as from the example above) in which case you now only have ten values per slot, even though each slot is multiple characters long. Similar story if you happen to choose all words that are in the 1,000 most common words, because the dictionary program may be using only those 1,000 common words, reducing your values per slot from 300k to 1k.
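In code terms, the takeaway from the “xi af ju” example is that your real strength is the minimum across the attack models that apply. A hedged sketch, reusing the possibilities function from above:

// A cracker will use whichever attack is cheapest for them, so your
// effective strength is the weakest model that fits your password.
function effectiveStrength(models) {
  return Math.min.apply(null, models);
}

// "xi af ju": strong against an exotic-words dictionary, weak against brute force.
effectiveStrength([
  possibilities(6000000, 3), // dictionary attack over ~6 million words
  possibilities(27, 8)       // lowercase + space brute force: 282,429,536,481
]); // -> 282,429,536,481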
Lastly, and this should be obvious, but once a random assortment of words or characters goes on the internet or becomes famous, it effectively is the same as a word or a phrase. If you see an example of a secure password on the internet (here you go: “D&hjd6G44@#46";}{neh*(Jeheg$#@EfTGTgSYhs”), it automatically ceases to be secure, because some programs build their dictionaries from the internet. That means that that “secure” password back there is no longer secure. So you can’t use “fluffy is puffy” or “correct horse battery staple” anymore. And really, you can’t use any password that has any google results if you google it (in quotes).
This is just one small part of password security, especially compared to problems like people reusing passwords for more than one site. But if it’s learned correctly, it can help solve the problem by creating easier to remember passwords and encouraging people to create unique passwords for each site they visit.
permalink