How to stream MP3 audio from Rdio

Recently, I was working on a personal project for which I wanted to stream audio from my Rdio (which, incidentally, is a great service) account. Unfortunately, the documented Rdio APIs don't provide a way to do this, instead offering streaming through a Flash player for web apps or compiled libraries for iOS and Android. I spent a bit of time reverse-engineering pieces of the Rdio ecosystem to figure out how to do this, and thought I'd post the resulting Python recipe in case anyone is interested.

First of all, apply for an Rdio API key.

Next, you’ll need to install a bit of software:

  • rdio-python package, which gives us access to the official Rdio REST API
  • PyAMF, which we use to make calls to the undocumented Flash API
  • rtmpdump, which we use to stream the FLV content from the RTMP server. Note that versions after 2.1d don't work, as they refuse to talk to servers they deem not to be genuine Adobe
  • ffmpeg, which we will use to transcode the FLV audio into MP3

Once you've got all that installed, the process is relatively straightforward: use the Rdio API to search for a track you want to download and grab a playback token for it; use the (undocumented) Flash API to retrieve the parameters needed to construct arguments to rtmpdump; and run rtmpdump, piping its output to ffmpeg to transcode the FLV to MP3.
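
That last step is just a shell pipeline. With the RTMP parameters returned by the Flash API substituted in (the placeholder variables below are mine, not the names Rdio uses), it looks roughly like this:

    % rtmpdump --rtmp "$RTMP_URL" --swfVfy "$SWF_URL" --playpath "$PLAYPATH" -o - | \
        ffmpeg -i - -acodec libmp3lame track.mp3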

Here’s a proof-of-concept script that implements that process:

To use it, invoke it with your API key, API secret and the query you wish to use to search for songs. It will perform the search and spit out shell commands for fetching and transcoding the song.

I've wrapped this behavior up in a simple Python library and put it up on GitHub as pyrdiostream.


NodeJS and V8

[This is a response to this blog post by @olympum, with the rest of the thread being here and here.]

Bruno makes three assertions about the relationship between NodeJS and V8: that V8 was not designed as a server-side engine, that V8’s lack of threading inhibits adequate fault isolation, and that lack of explicit alignment between the V8 and NodeJS projects may lead to problems in the future.

V8 was not designed for server-side execution

I’m not really sure what this means, as Bruno doesn’t provide any details on what he’s concerned about.

When compared with the JVM, which offers distinct client and server modes affecting primarily JIT compilation and garbage collection strategies, V8 is indeed less full-featured. In fact, in virtually all respects, the JVM is more mature and featureful than V8. However, that does not imply that it's a more appropriate choice for a server-side JavaScript runtime. After all, the JVM was designed from the ground up to run something vastly different from JavaScript. Though the list of alternative languages targeting the JVM is long and growing, it's unclear to me that supporting these is a priority for the JVM team (the invokedynamic instruction notwithstanding).

I would be very interested to see benchmarks comparing Rhino, V8, SpiderMonkey, et al. on workloads characteristic of server applications. The results on arewefastyet.com are interesting, but don't include the JVM. Unfortunately, I'm not enough of a JavaScript expert to opine on how relevant the benchmarks used there are to a more classically "server" workload. Perhaps someone else from the community could weigh in on this?

It would be wonderful if someone at Joyent or Yahoo! could contribute a representative benchmark (it's open source!) and/or add a JVM-based engine to AWFY.

V8’s lack of threads inhibits fault isolation

This is a specious argument, IMO.

In a system which runs each request in its own thread, fault isolation is no better than in a system which multiplexes requests over a single thread.

Let's examine what happens when a fault occurs in a thread-per-request model. In languages without direct memory access (JavaScript, Java, etc.), faults in threads are bubbled up to the top of the thread stack, say by virtue of an exception. Other threads continue to soldier on in their work, unaffected by this. I think this is what Bruno is referring to. Of course, it is possible for a problem in one thread to cause others to fail, particularly by exhausting available resources (e.g. file descriptors, memory, database connections, etc.). In addition, bugs in application synchronization can cause other threads to deadlock, see corrupted state, etc.

In a multiplexed model (e.g. NodeJS), things are largely the same: a fault will bubble up to the top of the event loop as an exception where it will be dealt with. Other requests are unaffected. Significantly, the lack of parallelism within the same address space suggests that this model may in fact be less likely to fail than a threaded model (no locking bugs!).
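
To make that concrete, here is a minimal sketch of the multiplexed case: a handler that throws loses only its own request, and the uncaughtException hook is where such faults surface at the top of the event loop.

    var http = require('http');
    var sys = require('sys');

    http.createServer(function(req, res) {
        if (req.url === '/boom') {
            // A bug while servicing this particular request...
            throw new Error('boom');
        }

        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('still here\n');
    }).listen(8000);

    // ...surfaces here, at the top of the event loop, without affecting any
    // other in-flight or future request.
    process.addListener('uncaughtException', function(e) {
        sys.puts('request failed: ' + e.message);
    });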

This argument boils down to which VM is more likely to crash of its own volition, taking all requests (be they in separate threads or not) with it. I don’t have any information one way or the other on whether the JVM is more reliable than V8. Bruno, do you?

The V8 team’s commitment to NodeJS is uncertain

While there doesn’t seem to be a formal commitment here, anecdotal evidence suggests that the V8 team is interested in seeing server-side JavaScript (and NodeJS in particular) succeed.

@jasonh makes several excellent points on the nature of this relationship (and Joyent’s) in his blog post.


Benchmarking Web Socket servers with wsbench

Web Sockets are gaining traction as a real-time full-duplex communication channel, with several leading browsers (Chrome 5, Safari 5, Firefox 4, etc.) having implemented support for some flavor of the protocol. Server support exists, but is not widespread, and is (entirely?) limited to specialized servers, with Socket.IO (based on NodeJS) being perhaps the most well-known. Finally, there appear to be no tools for doing load or other testing on Web Socket services.

Enter wsbench, a benchmarking tool for draft-76 Web Socket servers, featuring:

  • The ability to generate a high degree of load from a single client process.
  • Easy control over the number and rate of connections, and the number and size of messages sent and received on each connection.
  • The ability to target a specific port, path, and Web Socket protocol on the target server.
  • A core request/session engine that is easily scriptable using JavaScript.

You’ll find that wsbench is quite easy to use.

Here, we open and close 1000 connections to a Web Socket server running on localhost, port 8000. We generate 50 connection requests per second.

    % time ./wsbench -c 1000 -r 50 ws://localhost:8000
    Success rate: 100% from 1000 connections

    real    0m20.379s
    user    0m1.340s
    sys     0m0.517s

We can see a few interesting things in the output above.

  • The error rate is tracked by wsbench and reported at the end. Errors include failure to open a connection, failure to send a message, failure to close the connection cleanly, etc.
  • The wsbench process running on a late-model MacBook is able to generate this load using less than 10% of the CPU.

The above benchmarking run only tested establishing connections; we didn't send (or receive) any messages. By passing the -m 5 and -s 128 options to wsbench, we can send five 128-byte messages per connection (an example invocation follows the usage text below). Invoke wsbench with the -h option to see full usage:

usage: wsbench [options] <url>

Kick off a benchmarking run against the given ws:// URL.

We can execute our workload in one of two ways: serially, wherein each
connection is closed before the next is initiated; or in parallel, wherein
a desired rate is specified and connections initiated to meet this rate,
independent of the state of other connections. Serial execution is the
default, and parallel execution can be specified using the -r <rate>
option. Parallel execution is bounded by the total number of connections
to be made, specified by the -c option.

Available options:
  -c, --num-conns NUMBER   number of connections to open (default: 100)
  -h, --help               display this help
  -m, --num-msgs NUMBER    number of messages per connection (default: 0)
  -p, --protocol PROTO     set the Web Socket protocol to use (default: empty)
  -r, --rate NUMBER        number of connections per second (default: 0)
  -s, --msg-size NUMBER    size of messages to send, in bytes (default: 32)
  -S, --session FILE       file to use for session logic (default: None)
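
For example, to repeat the earlier run while also exchanging five 128-byte messages on each connection:

    % ./wsbench -c 1000 -r 50 -m 5 -s 128 ws://localhost:8000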

Beyond performance testing, it can be useful to run load against a server continually to discover any resource leaks. This can be done by passing the -c 0 option — 0 connections is interpreted as a special “infinite” value. For example -c 0 -r 100 will open/close 100 connections per second indefinitely (or until wsbench is terminated with a ^C).

For information on how to script the core of wsbench, take a look at the Session Scripting section of the project page on GitHub. Because this tool is written entirely in JavaScript (using NodeJS), you'll find that it's easily extensible using a familiar language.


Using sendfile(2) with NodeJS

NodeJS provides an interface to the sendfile(2) system call. Briefly, this system call allows the kernel to efficiently transport data from a file on disk to a socket without round-tripping the data through user space. This is one of the more important techniques that HTTP servers use to get good performance when serving static files.

Using this is slightly tricky in NodeJS, as the sendfile(2) call is not guaranteed to write all of a file’s data to the given socket. Just like the write(2) system call, it can declare success after only writing a portion of the file contents to the given socket. This is commonly the case with nonblocking sockets, as files larger than the TCP send window cannot be buffered entirely in the TCP stack. At this point, one must wait until some of this outstanding data has been flushed to the other end of the TCP connection.

Without further ado, the following code implements a TCP server that uses sendfile(2) to transfer the contents of a file to every client that connects.
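
A rough sketch of the shape of that server follows. It is illustrative only: it assumes the fs.sendfileSync(outFd, inFd, offset, length) call from node of that era, the net.Stream internals (fd, _writeWatcher), and the constants module's EAGAIN value, so treat the names as approximations rather than a drop-in replacement for the original gist.

    var fs = require('fs');
    var net = require('net');
    var constants = require('constants');   // errno values (location is an assumption)

    var PATH = '/tmp/some-big-file';         // hypothetical file to serve

    net.createServer(function(stream) {
        var fd = fs.openSync(PATH, 'r');
        var off = 0;

        var pump = function() {
            while (true) {
                var written;

                try {
                    // Synchronous sendfile: no round-trip through the libeio
                    // thread pool.
                    written = fs.sendfileSync(stream.fd, fd, off, 1024 * 1024);
                } catch (e) {
                    if (e.errno === constants.EAGAIN) {
                        // The socket can't accept more data right now. Kick the
                        // write watcher so that 'drain' fires when it can, and
                        // pick up where we left off rather than busy-waiting.
                        stream._writeWatcher.start();
                        return;
                    }

                    throw e;
                }

                if (written === 0) {
                    // sendfile(2) wrote nothing: EOF on the source file.
                    fs.closeSync(fd);
                    stream.end();
                    return;
                }

                off += written;
            }
        };

        stream.addListener('drain', pump);
        pump();
    }).listen(8000);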

This code is doing a couple of interesting things:

  • We use the synchronous version of the sendfile API. We do this because we don’t want to round-trip through the libeio thread pool.
  • We need to handle the EAGAIN error status from the sendfile(2) system call. NodeJS exposes this via a thrown exception. Rather than issuing another sendfile call right away, we wait until the socket is drained to try again (otherwise we’re just busy-waiting). It’s possible that the performance cost of generating and handling this exception is high enough that we’d be better off using the asynchronous version of the sendfile API.
  • We have to kick the write IOWatcher on the net.Stream instance ourselves to get the drain event to fire. This class only knows how to start the watcher itself when it notices a write(2) system call fail. Since we’re using sendfile(2) behind its back, we have to tell it to do this explicitly.
  • We notice that we’ve hit the end of our source file when sendfile(2) returns 0 bytes written.


More intelligent HTTP routing with NodeJS

Earlier this week, I wrote an article for YDN covering some of the reasons why one might want to run a multi-core HTTP server in NodeJS and some strategies for intelligently allocating connections to different workers. While routing based on characteristics of the TCP connection is useful, the approach outlined in that post has a serious shortcoming — we cannot actually read any data off of the socket when making these decisions. Doing so before passing off the file descriptor would cause the worker process to miss critical request data, choking the HTTP parser.

The above limitation precludes interrogating properties of the HTTP request itself (e.g. headers, query parameters, etc) to make routing decisions. In practice, there are a wide variety of use-cases where this is important: routing by cookie, vhost, path, query parameters, etc. In addition to cache affinity, this can provide some rudimentary forms of access control (e.g. by running each vhost in a process with a different UID or chroot(2) jail) or even QoS (e.g. by running each vhost in a process with its nice(2) value controlled).

Naively we could use NodeJS as a reverse HTTP proxy (and a pretty good one, at that), but the overhead of proxying every byte of every request is kind of a drag. As it turns out, we can use file descriptor passing to efficiently hand off each TCP connection to the appropriate worker once we’ve read enough of the request to make a routing decision. Thus, once the routing process delegates a connection to a worker, that worker owns it completely and the routing process has nothing more to do with it. No juggling connections, no proxying traffic, nothing. The trick is to do this in such a way that allows the routing process to parse as much of the request as it needs to while ensuring that all socket data remains available to the worker.

Step by step, we can do the following. Note that this does not work with HTTP/1.1 keep-alive, which multiplexes multiple requests over a single connection.

  1. Accept the TCP connection in the routing process
  2. Set up a data handler for the TCP connection that both retains a record of every byte received and uses a specially-constructed instance of the interruptible HTTP parser (part of NodeJS core) to parse as much of the request as we need
  3. Once we've seen enough of the request, make a routing decision; here we just use the vhost specified in the request
  4. Hand off the file descriptor and all data seen thus far to the worker
  5. In the worker, construct a net.Stream connection around the received FD and use it to emit a synthetic 'data' event to replay data already read off of the socket by the routing process (sketched below)
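
The worker's side of that hand-off (step 5) might look roughly like the sketch below. It assumes node-webworker's onmessage(msg, fd) extension, the old net.Stream(fd, type) constructor, and that the routing process ships the already-read bytes along in msg.data; the names are illustrative rather than lifted from the gist.

    var net = require('net');
    var http = require('http');

    // A perfectly ordinary HTTP server; it never knows that its connections
    // were accepted (and partially read) by another process.
    var server = http.createServer(function(req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('served by worker ' + process.pid + '\n');
    });

    // node-webworker extension: a file descriptor can ride along with a message.
    onmessage = function(msg, fd) {
        // Wrap the received descriptor in a net.Stream, add it to this
        // process's event loop, and hand it to the HTTP server as if it had
        // just been accepted here.
        var stream = new net.Stream(fd, 'tcp4');
        stream.resume();
        server.emit('connection', stream);

        // Replay every byte the routing process already pulled off the socket
        // so the worker's HTTP parser sees the request from its first byte.
        stream.emit('data', msg.data);
    };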

It's important to note that this does not rely on any modifications to the HTTP stack in the worker — just plain vanilla NodeJS. In order to do this, we have to recover from the fact that parsing the HTTP request in the routing process is destructive — it's pulling bytes off of the socket that are not available to the worker once it takes over the TCP connection. To make sure that the worker doesn't miss a single byte seen on the socket since its inception, we send over all data seen thus far and replay it in the worker using the synthetic 'data' event.

Keep in mind that this code is a prototype only (please don't ship it — I've left out a lot of error handling for the sake of readability), but I thought it was interesting enough to share with a broader audience. This implementation takes advantage of the task management and message passing facilities of node-webworker. It should run out of the box on node-v0.1.100.

Anyway, the key to this is being able to replay the socket’s data in the worker. You’ll notice in the gist above that we’re calling net.Stream.pause() once we’ve received all necessary data in the routing process. This ensures that this process doesn’t pull any more data off of the socket. If the kernel’s TCP stack receives more data for this socket after we’ve paused the stream, it will sit in the TCP receive buffer waiting for someone to read it. Once the worker process ingests the passed file descriptor and inserts it into its event loop, this newly-arrived data will be read. In a nutshell, we use the TCP stack itself to buffer data for us. If we really wanted to be clever, we might be able to use recv(2) with MSG_PEEK to look at data arriving on the socket while leaving it for the worker, but I’m not sure how this would play with the event loop.
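
For completeness, here is the routing process's half of the exchange, reduced to its bones (a sketch: the HTTP parsing is elided, and the webworker module path and message shape are assumptions):

    var net = require('net');
    var Worker = require('webworker').Worker;   // node-webworker (path assumed)

    var worker = new Worker(__dirname + '/worker.js');

    net.createServer(function(conn) {
        var seen = '';

        conn.addListener('data', function(d) {
            seen += d;   // HTTP headers are ASCII, so string concat is fine here

            // (HTTP parsing and the routing decision are elided.) Once we've
            // seen enough, stop reading; anything else the client sends stays
            // in the kernel's TCP receive buffer for the worker to pick up.
            conn.pause();

            // Ship the descriptor, plus every byte we've consumed, to the
            // worker using node-webworker's fd-passing extension.
            worker.postMessage({ data: seen }, conn.fd);
        });
    }).listen(8000);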

Finally, while I think this is an interesting technique, it's worth noting that a typical production NodeJS deployment would be behind an HTTP load balancer anyway, to front multiple physical hosts for availability if nothing else. Many load balancers can route requests based on a wide variety of characteristics like vhost, client IP, backend load, etc. However, if one doesn't want/need a dedicated load balancer, or needs very application-specific logic to make routing decisions, I think the above could be a useful tool.


Design of Web Workers for NodeJS

The node-webworker module aims to implement as much of the HTML5 Web Workers API as is practical and useful in the context of NodeJS. Extensions to the HTML5 API are provided where it makes sense (e.g. to allow file descriptor passing).

Motivation

Why bother to implement Web Workers for NodeJS? After all, child process support is already provided by the child_process module.

  • A set of standard (well, emerging-standard, anyway) platform-independent concurrency APIs is a useful abstraction, particularly as HTML5 gains wider adoption and JavaScript developers become familiar with Web Workers from browser development. As the set of NodeJS primitives for managing processes, child_process provides a lot of utility, but is easily misunderstood by developers who have not developed for a UNIX platform before (e.g. why does kill() not kill my process?). In addition, the error reporting APIs in the Web Workers spec are more full-featured and JavaScript-specific than those provided natively by child_process (e.g. one can get a stack trace, etc.).
  • Existing mechanisms for communicating with child processes involve stdin/stdout. Use of these built-in streams prevents sys.puts() and friends from working as expected. Further, these are opaque byte streams, requiring the application to implement its own framing logic to discern message boundaries.
  • HTML5 Shared Workers (part of the same spec) provide a useful naming service for addressing other workers. Without this, the application must maintain its own metadata for routing messages between workers. Note that shared workers are not yet implemented.

Design

The design that follows for Web Workers is motivated by a handful of underlying assumptions / philosophies:

  • Worker instances should be relatively long-lived. That is, it is not considered an important workload to be able to create and destroy thousands of workers as quickly as possible. Passing messages to existing workers to dispatch work items is favored over creating a new worker for each work item.
  • In the future, it will be desirable to run workers off-box, and to implement workers in other application frameworks / languages. This is particularly relevant in the choice of communication medium.
  • When practical, relevant standards and existing building blocks should be taken advantage of, particularly those that are geared towards JavaScript and/or HTTP. For example, this was one of the motivators for selecting Web Sockets as a messaging layer rather than rolling my own.

Worker processes

Each worker executes in its own self-contained node process rather than as a separate thread and V8 context within the master process.

The benefits of this approach include fault isolation (a worker running out of memory or triggering some buggy C++ code will not take down other workers); avoiding the complexity of managing multiple event loops in a single process; and the fact that typical OSes are more likely to schedule different processes on different CPUs (which may not always happen for multiple threads within the same process), allowing the application to utilize multiple CPUs.

Of course, there are drawbacks, including the higher cost of context switching between workers in a process-per-worker model compared to a thread-per-worker model; the fact that passing messages between processes typically requires a data copy and always requires serializing data; and the overhead of spawning a new process.

The worker context

Each worker is launched by lib/webworker-child.js, which is handed paths to the UNIX socket to use for communication with the parent process (see below) and the worker application itself.

This script is passed to node as the entry point for the process and is responsible for constructing a V8 script context populated with the bits relevant to the Web Worker API (e.g. the postMessage(), close() and location primitives, etc.). It also establishes communication with the parent process and wires up the message send/receive listeners. It's important to note that all of this happens in a context entirely separate from the one in which the worker application will be executing; the worker gets a seemingly plain-Jane Node runtime with the Web Worker API bolted on. The worker application doesn't need to require() additional libraries or anything.

Inter-Worker communication

The Web Workers spec describes a simple message passing API.

Under the covers, this is implemented by connecting each dedicated worker to its parent process with a UNIX domain socket. This is lower overhead than TCP, and allows for UNIX goodies like file descriptor passing. Each master process creates a dedicated UNIX socket for each worker at the path /tmp/node-webworker-<pid>/<worker-id>, where <pid> is the PID of the process doing the creating, and <worker-id> is the ID of the worker being created. Although muddying up the filesystem namespace doesn't thrill me, this makes the implementation easier than listening on a single socket for all workers.

Message passing is done by negotiating an HTML5 Web Socket connection over this UNIX socket transport. This is done to provide a reasonably performant, standards-based message framing implementation and to lay the groundwork for communicating with off-box workers via HTTP over TCP, which may be implemented in another application stack entirely (e.g. Java, etc.). The overhead of negotiating and maintaining the Web Socket connection is one round trip for handshaking plus the cost of maintaining HTTP state objects (http_parser and such). The handshaking overhead is not considered an undue burden given that workers are expected to be relatively long-lived, and the HTTP state overhead is considered small.

Message format

The format of the messages themselves is JSON, serialized using JSON.stringify() and de-serialized using JSON.parse(). Significantly, the use of a framing protocol allows the Web Workers implementation to wait for an entire, complete JSON blob to arrive before invoking JSON.parse(). Although not implemented, it should be possible to negotiate a supported content encoding (e.g. to support MsgPack, BERT, etc.) when setting up the Web Socket connection. The built-in JSON object is relatively performant, though node-msgpack is quite a bit faster, particularly when de-serializing.

Each object passed to postMessage() is wrapped in an array like so [<msg-type>, <object>]. This allows the receiving end of the message to distinguish control messages (CLOSE, ERROR, etc) from user-initiated messages.
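
In other words, what actually travels across the Web Socket connection for each message is a single JSON string along these lines (the constant names here are made up for illustration; the real ones live in the node-webworker source):

    var MSGTYPE_USER = 0;    // ordinary postMessage() payloads (hypothetical name)
    var MSGTYPE_CLOSE = 1;   // control message, e.g. worker shutdown (hypothetical)

    var frameMessage = function(msgType, obj) {
        // A complete message is a JSON-serialized [<msg-type>, <object>] array.
        return JSON.stringify([msgType, obj]);
    };

    var unframeMessage = function(str) {
        var arr = JSON.parse(str);
        return { type: arr[0], payload: arr[1] };
    };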

Sending file descriptors

As mentioned above, this Web Workers implementation can take advantage of node's ability to send file descriptors over UNIX sockets. As a nonstandard extension to the API, postMessage(obj [, <fd>]) accepts an optional file descriptor. A symmetric API extension was made on the receiving end, where the onmessage(obj [, <fd>]) handler is passed an fd parameter if a file descriptor was received along with the specified message.

Unfortunately, UNIX sockets seem to allow file descriptors to arrive out-of-band with respect to the data payload with which they were sent. To tie a received file descriptor to the message with which it was sent, all messages are wrapped in an array of the form [<fd-seqno>, <obj>], where <obj> is the object passed to postMessage(). The <fd-seqno> parameter starts off at 0 and is incremented for every file descriptor sent (the first file descriptor sent has a <fd-seqno> of 1). This provides the receiving end with enough metadata to tie out-of-band descriptors together with their originating message.
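
A minimal sketch of the sending side of that bookkeeping (names are illustrative, and the Web Socket framing is ignored for clarity; the descriptor ultimately rides on the UNIX socket via net.Stream's write(data, encoding, fd) form):

    var fdSeqno = 0;

    var sendMessage = function(stream, obj, fd) {
        if (fd !== undefined) {
            // The first descriptor ever sent is tagged with seqno 1.
            fdSeqno++;
        }

        // Every message is wrapped as [<fd-seqno>, <obj>]; the receiving end
        // uses the seqno to pair a descriptor that arrives out-of-band with
        // the message it was sent alongside.
        stream.write(JSON.stringify([fdSeqno, obj]), 'utf8', fd);
    };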
