Mozilla Services Repositories

Cornice provides helpers to build and document REST-ish Web Services with Pyramid, with decent default behaviors. It provides validation features and can be integrated with tools like Colander for complex validation.

Cornice can automatically generate Sphinx-based documentation for your APIs.
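Cornice's real API registers services and validators on Pyramid; as a rough, stdlib-only sketch of the validated-handler idea (all names here are invented for illustration, not Cornice's actual API):

```python
def require_fields(*fields):
    """Hypothetical decorator: reject requests missing required fields."""
    def wrap(handler):
        def validated(request):
            missing = [f for f in fields if f not in request]
            if missing:
                # validation failed: short-circuit with an error response
                return {"status": 400, "errors": missing}
            return {"status": 200, "body": handler(request)}
        return validated
    return wrap

@require_fields("user_id")
def get_profile(request):
    # only runs once validation has passed
    return {"profile": request["user_id"]}
```

With Cornice itself, validators are attached when registering a service method, and errors are accumulated on the request rather than returned directly.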


Circus is a process & socket manager. It can be used to monitor and control processes and sockets.

With Circus you can control a whole stack from the command-line or a web interface, and have real-time statistics.
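A minimal Circus configuration sketch for a stack of one watched command plus a managed socket (`myapp` is a placeholder command, not a real module):

```ini
[circus]
statsd = true

[watcher:webapp]
cmd = python -m myapp
numprocesses = 3

[socket:web]
host = 127.0.0.1
port = 8080
```

Running `circusd` against such a file starts and supervises the processes; `circusctl` then controls them from the command line.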


Metlog is a service for applications to capture and inject arbitrary data into a backend storage suitable for out-of-band analytics and processing.

It's a client-server system that has almost no impact on your application's performance. You can use it, for instance, to send stats to Logstash over various transports such as UDP and ZeroMQ.
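The low overhead comes from fire-and-forget transports. As a stdlib-only sketch of the UDP case (this is not Metlog's actual API, just the underlying idea):

```python
import json
import socket

def send_stat(addr, name, value, sender=None):
    """Fire-and-forget UDP send: the app never blocks on the backend."""
    record = json.dumps({"name": name, "value": value}).encode("utf-8")
    if sender is None:
        sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(record, addr)

# demo receiver standing in for the collection backend, on an ephemeral port
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))

send_stat(receiver.getsockname(), "requests", 1)
payload = json.loads(receiver.recv(1024))
```

Because UDP is connectionless, a slow or absent collector never stalls the application; the trade-off is that records can be dropped.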


Powerhose turns your CPU-bound tasks into I/O-bound tasks so your Python applications are easier to scale.

Powerhose is an implementation of the Request-Reply Broker pattern in ZMQ, with some extra features on top.
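The core idea, stripped of ZMQ: the application hands CPU-heavy jobs to external workers and then merely waits on a result channel, which is an I/O wait. A minimal in-process sketch using queues and a worker thread (Powerhose itself uses separate worker processes over ZMQ sockets):

```python
import queue
import threading

def fib(n):
    """Deliberately CPU-bound toy workload."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

jobs, results = queue.Queue(), queue.Queue()

def worker():
    # the worker does the CPU-bound part, off the caller's critical path
    while True:
        job_id, n = jobs.get()
        if job_id is None:
            break
        results.put((job_id, fib(n)))

threading.Thread(target=worker, daemon=True).start()

jobs.put((1, 10))                  # submit a job
job_id, value = results.get()      # caller just blocks on an I/O-style wait
jobs.put((None, None))             # shut the worker down
```

With real worker processes on other machines, the caller's side of this exchange is pure socket I/O, which is what makes the application easier to scale.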


Wat? Another message queue?

Given the proliferation of message queues, one could be inclined to believe that inventing more is not the answer. Using an existing solution was attempted multiple times, with almost every existing message queue product.

The others failed (for our use-cases).

Queuey is meant to handle some unique conditions that most other message queue solutions either don't handle, or handle very poorly. Many of them, for example, are designed for queueing or pub/sub scenarios that never need longer-term (multi-day) storage of not just many messages, but huge numbers of queues.
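To make that design point concrete, here is a toy in-memory sketch of the data model (invented names, not Queuey's API): many named queues, each retaining messages until a TTL expires rather than deleting them on delivery.

```python
import time
import uuid
from collections import defaultdict

class QueueStore:
    """Toy store: many queues, messages kept until a TTL expires."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.queues = defaultdict(list)  # name -> [(timestamp, id, body)]

    def put(self, queue, body):
        msg_id = uuid.uuid4().hex
        self.queues[queue].append((time.time(), msg_id, body))
        return msg_id

    def get(self, queue, since=0.0):
        # prune expired messages, then return everything newer than `since`;
        # messages are NOT removed on read, so many consumers can replay them
        cutoff = time.time() - self.ttl
        self.queues[queue] = [m for m in self.queues[queue] if m[0] >= cutoff]
        return [m for m in self.queues[queue] if m[0] >= since]
```

A store like this must scale in the number of queues, not just the number of messages per queue, which is the condition the paragraph above says most brokers handle poorly.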


Tokenlib is a generic support library for signed-token-based auth schemes. We are using it to generate HMAC tokens for our token-server project.
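The underlying scheme can be sketched with the stdlib alone: a token is an encoded payload plus an HMAC signature over it, and verification recomputes the signature. (This mirrors the concept only; `make_token` and `parse_token` here are illustrative stand-ins, not Tokenlib's actual implementation.)

```python
import base64
import hashlib
import hmac
import json

def make_token(payload, secret):
    """Sketch of a signed token: base64 payload + HMAC-SHA256 signature."""
    data = base64.urlsafe_b64encode(json.dumps(payload).encode("utf-8"))
    sig = hmac.new(secret, data, hashlib.sha256).hexdigest().encode("ascii")
    return data + b"." + sig

def parse_token(token, secret):
    """Verify the signature in constant time, then decode the payload."""
    data, sig = token.rsplit(b".", 1)
    expected = hmac.new(secret, data, hashlib.sha256).hexdigest().encode("ascii")
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(data))
```

Anyone holding the secret can verify a token without a database lookup, which is what makes the scheme attractive for services like a token server.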


Vaurien is a TCP proxy that lets you simulate chaos between your application and a backend server.


Collect, aggregate, and visualize your data. That's the long-term goal, at least. Currently, Heka works as an agent deployed on nodes to collect data, and as an aggregator that agents can relay data to, which then saves it to a permanent store (or multiple ones).
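That agent/aggregator split can be sketched in a few lines (the class and function names here are invented for illustration, not Heka's API):

```python
class Aggregator:
    """Receives records relayed by agents and fans them out to stores."""

    def __init__(self, stores):
        self.stores = stores  # each store is any object with .append()

    def relay(self, record):
        for store in self.stores:
            store.append(record)

def agent(source_records, aggregator):
    """An agent collects data on its node and relays it upstream."""
    for record in source_records:
        aggregator.relay(record)

# two stand-in permanent stores behind one aggregator
disk_store, backup_store = [], []
agg = Aggregator([disk_store, backup_store])
agent([{"host": "node1", "load": 0.4}], agg)
```

Keeping persistence behind the aggregator is what lets multiple stores (or store backends) be swapped in without touching the agents.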