Friday, August 6, 2010

html2xml

While this library doesn't cover the full gamut of possible weirdness that HTML provides, it does handle a lot of the most obvious stuff. All of the following are accounted for:
  • Unclosed Tags:
    HTMLtoXML("<p>Hello") == '<p>Hello</p>'
  • Empty Elements:
    HTMLtoXML("<img src=test.jpg>") == '<img src="test.jpg"/>'
  • Block vs. Inline Elements:
    HTMLtoXML("<b>Hello <p>John") == '<b>Hello </b><p>John</p>'
  • Self-closing Elements:
    HTMLtoXML("<p>Hello<p>World") == '<p>Hello</p><p>World</p>'
  • Attributes Without Values:
    HTMLtoXML("<input disabled>") == '<input disabled="disabled"/>'
Note: It does not take into account where in the document an element should exist. Right now you can put block elements inside a <head>, or a <th> inside a <p>, and it'll happily accept them. It's not entirely clear how the logic should work for those cases, but it's something that I'm open to exploring.
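
Just to illustrate the general idea (this is not the library's actual code), here's a minimal sketch of how the unclosed-tag case can be handled with a stack of open elements; the block-vs-inline and self-closing rules above need extra bookkeeping on top of this, and all the names here are made up for the example.

    // Toy sketch: close unclosed tags by tracking a stack of open elements.
    var VOID = { img: 1, br: 1, hr: 1, input: 1, meta: 1, link: 1 };

    function closeUnclosed(html) {
      var stack = [];
      var out = html.replace(/<(\/?)([a-zA-Z]+)([^>]*)>/g, function (m, slash, tag) {
        tag = tag.toLowerCase();
        if (slash) {                        // closing tag: unwind to its opener
          var fixes = '';
          while (stack.length && stack[stack.length - 1] !== tag) {
            fixes += '</' + stack.pop() + '>';
          }
          if (stack.length) stack.pop();
          return fixes + m;
        }
        if (!VOID[tag]) stack.push(tag);    // remember non-void opening tags
        return m;
      });
      while (stack.length) out += '</' + stack.pop() + '>';  // close leftovers
      return out;
    }

    closeUnclosed("<p>Hello");   // -> '<p>Hello</p>'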

moo wave


During late October of last year, I reverse-engineered some of the features of the Google Wave client. Up until then, the only published specs were the federation protocols, which dealt with how multiple wave servers could use a common protocol to support users without a central authority, along with the gadget and robot APIs. Notably missing was a client/server API: a way for a user of the Google Wave client specifically (which did not yet support federation, and to date, preview still does not) to browse and view the waves in their inbox without switching to an entirely new provider. The first component was the ability to read waves. After that was accomplished, I tried to reverse-engineer a more complex aspect of the protocol: the ability to search waves. I eventually realized that search was part of a larger puzzle, the real-time BrowserChannel wire protocol on which virtually all of Wave was based. I made some progress, but near the end I gave up in frustration. Luckily, someone else became interested in the same thing, and Silicon Dragon basically got search working.

dom indexer


There are lots of compression systems for JS out there. There are the really smart, magical Rhino-based JS rewriters like Closure and YUI. And there are the string-based ones: packer's base62, Huffman, and LZ77. But everything in the latter category relies on some sort of dictionary coder, where the dictionary (or Huffman tree) needs to be sent alongside the compressed content.
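
To make the "dictionary travels with the payload" point concrete, here's a deliberately tiny, illustrative dictionary coder (nothing like packer's actual algorithm, and the names are made up): repeated words get swapped for short tokens, but the word list has to be shipped along with the compressed string or the receiver can't undo the substitution.

    // Toy dictionary coder: assumes the source doesn't already contain
    // '@' followed by a digit, which is what we use as token markers.
    function toyCompress(src) {
      var counts = {};
      (src.match(/\w+/g) || []).forEach(function (w) {
        counts[w] = (counts[w] || 0) + 1;
      });
      // keep at most ten repeated words so every token stays one digit long
      var dict = Object.keys(counts).filter(function (w) {
        return counts[w] > 1 && w.length > 2;
      }).slice(0, 10);
      var payload = src;
      dict.forEach(function (w, i) {
        payload = payload.replace(new RegExp('\\b' + w + '\\b', 'g'), '@' + i);
      });
      return { dict: dict, payload: payload };  // both halves must be sent
    }

    function toyDecompress(packed) {
      var out = packed.payload;
      packed.dict.forEach(function (w, i) {
        out = out.replace(new RegExp('@' + i, 'g'), w);
      });
      return out;
    }

    var packed = toyCompress('function add(a, b) { return a + b; } function sub(a, b) { return a - b; }');
    // packed.dict    -> ['function', 'return']
    // packed.payload -> '@0 add(a, b) { @1 a + b; } @0 sub(a, b) { @1 a - b; }'
    toyDecompress(packed);  // round-trips back to the original source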

gif


I've tried this before, but it didn't work: canvas can't do toDataURL('image/gif'), and the primitive GLIF library couldn't do much, so I never had the opportunity to test the GIF-merging code I had written. But I'm at it again, this time porting from the AS3GIF library, an awesomely comprehensive bitmap-to-binary GIF encoder that even supports LZW compression (and the patent has luckily expired, yay!). AS3GIF is supposed to "play and encode animated GIFs", but since web pages can usually play GIFs natively just fine, this is only a port of the GIFEncoder portion of the library. And it works really well. The rest of this post is copied from the Github readme. It's interesting how the w2_embed/anonybot embed post was a blog post turned into a readme, while this is a readme turned into a blog post. I'll start with a link to the Github repo anyway:
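
For a sense of what using the encoder half looks like in a page, here's a sketch assuming the port keeps AS3GIF's GIFEncoder interface (setRepeat/setDelay/start/addFrame/finish, plus a byte stream you can base64 into a data: URI); treat the exact method names as coming from my reading of the readme rather than gospel, and encode64 as a stand-in for whatever base64 helper you bundle.

    // Sketch: build a two-frame animated GIF from a canvas.
    var canvas = document.createElement('canvas');
    canvas.width = canvas.height = 64;
    var context = canvas.getContext('2d');

    var encoder = new GIFEncoder();
    encoder.setRepeat(0);     // 0 = loop forever
    encoder.setDelay(250);    // milliseconds per frame
    encoder.start();

    context.fillStyle = 'red';
    context.fillRect(0, 0, canvas.width, canvas.height);
    encoder.addFrame(context);                // grab frame 1

    context.fillStyle = 'blue';
    context.fillRect(0, 0, canvas.width, canvas.height);
    encoder.addFrame(context);                // grab frame 2

    encoder.finish();

    // Since toDataURL('image/gif') isn't available, base64 the raw bytes yourself.
    var binary = encoder.stream().getData();
    var img = new Image();
    img.src = 'data:image/gif;base64,' + encode64(binary);
    document.body.appendChild(img);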

OOH SHINY


I have yet to give up entirely on ShinyTouch, my experiment in creating a touch screen input system that requires virtually no setup. For people who haven't read my posts from last year, it works because, magically, things look shinier when you look at them from an angle. So if you mount a camera at an angle (it doesn't need to be as extreme as the screenshot above), you end up seeing a reflection on the surface of the screen (this can be aided by a transparent layer of acrylic or by a glossy display, but as you can see, mine are matte and they still work). The other pretty obvious idea behind ShinyTouch is that on a reflective surface, especially one observed from a non-direct angle, the distance from the reflection (I guess my eighth grade science teacher would call it the "virtual image") to the apparent finger, or "real image", is twice the distance from either one to the surface of the display. In other words, the reflection gets closer to your finger as your finger gets closer to the mirror. A webcam gives a two-dimensional bitmap of data (plus one non-spatial dimension of time). That yields (after a perspective transform) the X and Y positions of the finger. But an important aspect of a touchscreen, and of what this technology is also capable of, a "zero-touch screen", is the Z axis: the distance between the finger and the screen. A touchscreen has a binary Z axis: touch or no touch. Since you can measure the distance between the apparent real finger and its reflection, you can recover the Z axis. That's how ShinyTouch works.
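
In code, the Z-axis trick boils down to one line; this is just an illustrative sketch (not ShinyTouch's actual source), assuming you already have the perspective-transformed vertical positions of the fingertip and of its reflection:

    // The reflection sits as far "behind" the surface as the finger is in
    // front of it, so the gap between the two is twice the hover height.
    function hoverHeight(fingerY, reflectionY) {
      return Math.abs(reflectionY - fingerY) / 2;
    }

    var TOUCH_THRESHOLD = 3;  // in surface units; tune per camera and setup
    function isTouching(fingerY, reflectionY) {
      return hoverHeight(fingerY, reflectionY) < TOUCH_THRESHOLD;
    }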

Fluids


It’s always bothered me how some web sites have these fixed-width layouts, sometimes with insanely thin boxes allocated to content. The vast majority of my screen becomes this orange blob of text. Chrome’s visual appearance motto is “Content not chrome”. That doesn’t help if the content is being obscured by the presentation of the content (gopher might have solved that problem). Less chrome just means my eyes start to drown in +/- 1,732,405 pixels of orange. Even outside the extreme case, having a fixed-width layout isn’t efficient, and using something like Readability to only show the content removes the personality of the site or author, and only works on articles.

Lindemann–Weierstrass theorem

In mathematics, the Lindemann–Weierstrass theorem is a result that is very useful in establishing the transcendence of numbers. It states that if α1, ..., αn are algebraic numbers which are linearly independent over the rational numbers Q, then e^α1, ..., e^αn are algebraically independent over Q; in other words, the extension field Q(e^α1, ..., e^αn) has transcendence degree n over Q.
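
For reference, the same statement in compact notation:

    \text{If } \alpha_1, \dots, \alpha_n \text{ are algebraic and linearly independent over } \mathbb{Q},
    \text{ then } \operatorname{trdeg}_{\mathbb{Q}} \mathbb{Q}\bigl(e^{\alpha_1}, \dots, e^{\alpha_n}\bigr) = n.

The classic special case: taking n = 1 and α1 = 1 (nonzero, hence linearly independent over Q on its own) shows that e is transcendental over Q.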