1. Entering proper HTML-formatted lists in Blogger sucks because there's no option to get at the pure HTML of your post. It throws in <br /> tags at carriage returns and surrounds the whole thing in a <p></p> pair. So if you put in a proper <ul> list you have to open up the next paragraph with a <p> in order to get subsequent style to match earlier text, even though with all those <br />s it's kind of inconsistent. That's why this isn't a proper HTML list, and that kind of eats at me a bit. The same thing would apply to using any other HTML structures that implicitly end paragraphs. I think a table or definition list would do this. Probably lots of other stuff. Boo.
2. Uploading images to Blogger sucks. It uses a bunch of complicated shit instead of a nice simple <img>, resizes your images in a way that I can't figure out how to directly edit, and inserts this mess at the beginning of the <textarea> instead of at the cursor position. Boo.
3. Unix sucks. I'm logged into my computer under my username, aldimond. I want to get some files off my nifty new digital camera. This is the first time I've done this. So I emerge gphoto2 and try to detect my camera. Ha, nice fucking try, geek boy. Thing is, it works as root. It doesn't take long to figure out that I have to add myself to the group plugdev to get access to certain hotplugged devices like the camera. Adding my user to the group is easy enough. But I can't *really* log into the group without logging out and back in. Which means restarting my X session. Which is not that big of a deal, but it's ugly. Obviously, I could start a new login shell. But that means that until I truly re-login I'll only be able to launch gphoto2-based programs from the new login shell; also, that's only really obvious to a Unix geek. Boo.
4. ImageMagick's -adaptive-resize option sucks. A little background here. In the 90s there were lots of shitty-looking graphics on Teh 'Webs that were made by taking big graphics and downsampling them (that is, taking every nth pixel in each direction to make a 1/n-sized image). These images suffered from aliasing artifacts: jagged edges, lines of uneven width, ugly-looking curves. The solution is to apply an anti-aliasing filter before downsampling. An anti-aliasing filter blurs the image, removing "high-frequency" data that can't be expressed in the smaller version of the image. These days anti-aliasing techniques are widely used to create small icons and fonts from larger drawings, replacing the old technique of drawing sharper-looking small icons pixel-for-pixel; in fact, it's a major stylistic cue that separates old-looking software designed to look good on low-resolution monitors from new-looking software designed to look good on high-resolution monitors. The key to anti-aliasing is that for a given downsampling (image-scaling) ratio there is a mathematically determined frequency threshold (the Nyquist frequency of the smaller image). The anti-aliasing filter must (as near as possible) eliminate all frequencies above this threshold before downsampling.
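The whole argument fits in a toy 1-D sketch (plain Python; a crude box average stands in for a real anti-aliasing filter, which would use a better kernel, and the "stripes" signal is my own made-up stand-in for fine image detail):

```python
import math

def naive_decimate(signal, n):
    """Keep every nth sample -- no anti-aliasing filter at all."""
    return signal[::n]

def filtered_decimate(signal, n):
    """Average each block of n samples (a crude box low-pass filter),
    then keep one sample per block."""
    return [sum(signal[i:i + n]) / n for i in range(0, len(signal) - n + 1, n)]

# Fine "stripes": a cosine at 1/3 cycle per sample. After 4x
# downsampling, the Nyquist limit is 1/8 cycle per (new) sample,
# so this detail is unrepresentable and must be filtered out.
stripes = [math.cos(2 * math.pi * i / 3) for i in range(48)]

aliased = naive_decimate(stripes, 4)
smoothed = filtered_decimate(stripes, 4)

# The naive version still swings at full amplitude -- a bogus
# low-frequency moire pattern. The box filter knocks it way down
# (a proper filter would kill it almost completely).
print(max(abs(v) for v in aliased))   # 1.0
print(max(abs(v) for v in smoothed))  # 0.25
```

The aliased output oscillates at 1/3 cycle per sample again — the old frequency "folded" down into the representable range, which is exactly the fake pattern you see in badly downsampled stripes and fences.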
There's another type of filter called "adaptive blur". It blurs the image, like an anti-aliasing filter (they're both basically low-pass filters, that is, they allow low frequencies and block high ones), but it tries not to blur edges. It might be useful if you had a noisy image with some text in it, and you wanted the noise blurred out but the text to remain sharp. Or if you had a close-up of a face with some little blemishes on it, and you wanted to smooth out the blemishes without blurring the features (you'd have to be careful to set the right thresholds in order to keep the features and smooth the blemishes; a human-directed technique would probably yield better results than a purely algorithmic one). At any rate, the result of an adaptive blur is that in high-detail areas of the image you still have lots of high frequency content. The "adaptive resize", as far as I can tell, simply applies an adaptive blur, then downsamples. Stupid. It blurs the areas that don't need to be blurred before downsampling and leaves untouched the areas that do need to be blurred. What's sad is that some poor slob is going to see "adaptive resize" and think, "Ooh, adaptive, that must be good," use it on pictures to post to the web, and think that the jaggies are supposed to be there, when really, it just makes his images look oh-so-Web-1.0. I cannot come up with a use case for this resize algorithm. It just makes no sense to me.
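To see why adaptive-blur-then-downsample fails, here's the same toy setup with a mock "adaptive" filter of my own devising (not ImageMagick's actual algorithm): it averages a neighborhood but leaves a sample untouched whenever the local contrast is high, i.e. "don't blur edges" — which means it refuses to touch exactly the high-detail regions that most need filtering:

```python
import math

def adaptive_blur(signal, n, threshold=0.5):
    """Mock adaptive blur: average a neighborhood, but keep a sample
    sharp when local contrast exceeds the threshold. A toy stand-in
    for edge-preserving blurs, not ImageMagick's implementation."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - n // 2), min(len(signal), i + n // 2 + 1)
        window = signal[lo:hi]
        if max(window) - min(window) > threshold:
            out.append(signal[i])              # "edge" detected: no blur
        else:
            out.append(sum(window) / len(window))
    return out

# Same fine stripes as before: unrepresentable after 4x downsampling.
stripes = [math.cos(2 * math.pi * i / 3) for i in range(48)]

# The stripes all register as high-contrast "edges", so the adaptive
# blur passes them through untouched; decimation then aliases them
# at full strength.
blurred = adaptive_blur(stripes, 4)
aliased = blurred[::4]
print(max(abs(v) for v in aliased))  # 1.0 -- the "preserved detail" aliased anyway
```

That's the complaint in miniature: the filter smooths the flat areas (which decimation would have handled fine anyway) and leaves the busy areas full of high-frequency content, which then folds into jaggies and moiré.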
To prove my point, here is an image that's had a normal anti-aliasing filter applied before downsampling to 25% of original size in each direction (my camera have many megapixel; seven, to be approximate). (You have to click on the image to actually see it full-size, because of point #2 above.)
Here is the image that's been "adaptively resized". Notice the aliasing artifacts in the text on the signs, on the bottom bar of the gate, on the log steps to the right of the gate, on the wooden sign beyond the steps, and in what is really the most classic form of aliasing (classic in that it's most similar to the aliasing typically encountered in the audio world), the pattern in the white fence beyond the gate (similar to the patterns you see when someone on TV is wearing clothes with fine stripes... actually, sometimes on TV it looks like the anti-aliasing is done right for the luminance channel, but the color data, which is sampled at half the rate of the luminance data, has severe aliasing). These artifacts are plainly obvious, even though I'm using a CRT monitor at 1600x1200 (which is high enough on this monitor to significantly blur images). If you're using a lower resolution or an LCD panel they'll be even worse. Again, click on the image to see it in its true Web-1.0 glory.
EDIT: Just for the hell of it, here's a full-sized excerpt from the shot. Seven is a lot of megapixels, and with the full-sized image you can see what's really going on in that section of fence. Also, I don't have very steady hands but the camera has pretty good image stabilization, autofocus and even automatic contrast adjustments (I know next to nothing about photography, and even if I did I wasn't about to stand in that freezing canyon in a short-sleeved bike jersey fiddling with the camera, I just turned it on, aimed, shot and left). But you can see farther back on the fence that even the image stabilization can't work miracles, as the fence is blurred out. Well, actually, I have another shot, out the windshield of a car going 30MPH, that I consider to be a pretty damn miraculous feat of image stabilization, and I'll post that later at some point.