Autopush
Mozilla Push server and Push Endpoint utilizing Rust, Actix, and a key/value data store.
This is the fourth generation of Push server built in Mozilla Services, and is built to support the W3C Push spec.
For how to read and respond to autopush error codes, see Errors.
For an overview of the Mozilla Push Service and where autopush fits in,
see the Mozilla Push Service architecture
diagram.
This push service uses websockets to talk to Firefox, with a Push
endpoint that implements the WebPush
standard for its http
API.
Autopush APIs
For developers writing mobile applications at Mozilla, or web developers using Push on the web with Firefox.
Running Autopush
For anyone who just wants to run autopush, whether for testing Push locally with Firefox or for deploying autopush to a production environment for Firefox.
Developing Autopush
For developers wishing to work with the latest autopush source code, it's recommended that you first familiarize yourself with running Autopush before proceeding.
Source Code
All source code is available on GitHub under autopush.
- autoconnect - WebSocket server for desktop UAs
- autoconnect_common - Common functions for autoconnect
- autoconnect_settings - Settings and configuration
- autoconnect_web - HTTP functions
- autoconnect_ws - WebSocket functions
- autoconnect_ws_sm - WebSocket state machine
- autoendpoint - HTTP server for publication and mobile
- autopush_common - Common functions for autoconnect and autoendpoint
We are using Rust for a number of optimizations and speed improvements. These efforts are ongoing and may be subject to change. Unfortunately, this also means that formal documentation is not yet available. You are, of course, welcome to review the code located in autopush-rs.
Changelog
Bugs/Support
Bugs should be reported on the autopush GitHub issue tracker.
autopush Endpoints
autopush is automatically deployed from master to a dev environment for testing, a stage environment for tagged releases, and the production environment used by Firefox/FirefoxOS.
dev
- Websocket: wss://autoconnect.dev.mozaws.net/
- Endpoint: https://updates-autopush.dev.mozaws.net/
stage
- Websocket: wss://autoconnect.stage.mozaws.net/
- Endpoint: https://updates-autopush.stage.mozaws.net/
production
- Websocket: wss://push.services.mozilla.com/
- Endpoint: https://updates.push.services.mozilla.com/
Reference
License
autopush
is offered under the Mozilla Public License 2.0.
Architecture
Overview
This overview focuses on the Autopush square in the architecture diagram above.
Autopush consists of two types of server daemons:
- autoconnect (connection node): handles large numbers of Firefox user agents over the WebSocket protocol.
- autoendpoint (endpoint node): provides a WebPush HTTP API that Application Servers use to HTTP POST messages to endpoints.
To have a running Push Service for Firefox, both of these server daemons must be running and communicating with the same Storage system and tables.
Endpoint nodes handle all Notification
POST requests, looking up in
storage to see what Push server the UAID is connected to. The Endpoint
nodes then attempt delivery to the appropriate connection node. If the
UAID is not online, the message may be stored in Storage in the
appropriate message table.
Push connection nodes accept websocket connections (this can easily be HTTP/2 for WebPush), and deliver notifications to connected clients. They check Storage for missed notifications as necessary.
A deployment typically runs many more connection nodes to handle user agent connections, while endpoint nodes can be added as needed for notification throughput.
Cryptography
The HTTP endpoint URLs generated by the connection nodes contain encrypted information: the UAID and Subscription to send the message to. This means that both daemons must be supplied with the same CRYPTO_KEY.
See autopush_common::endpoint::make_endpoint(...)
for the endpoint
URL generator.
If you are only running Autopush locally, you can skip ahead to Running Autopush, as the later topics in this document apply only to development or production-scale deployments of Autopush.
WebPush Sort Keys
Messages for WebPush are stored using a partition key + sort key. Originally, the sort key was:
CHID : Encrypted(UAID: CHID)
The encrypted portion was returned as the Location to the Application Server. Decrypting it resulted in enough information to create the sort key so that the message could be deleted and located again.
For WebPush Topic messages, a new scheme was needed since the only way to locate the prior message is the UAID + CHID + Topic. Using Encryption in the sort key is therefore not useful since it would change every update.
The sort key scheme for WebPush messages is:
VERSION : CHID : TOPIC
To ensure updated messages are not deleted, each message will still have an update-id key/value in its item.
Non-versioned messages are assumed to be original messages from before this scheme was adopted.
VERSION
is a 2-digit 0-padded number, starting at 01 for Topic messages.
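As a concrete illustration, this scheme can be assembled with ordinary shell string formatting. The CHID and topic below are made-up example values, not identifiers from a real deployment.

```shell
# Sketch: building a WebPush sort key (VERSION : CHID : TOPIC) as described
# above. All values are illustrative examples.
VERSION=$(printf '%02d' 1)   # 2-digit, 0-padded; Topic messages start at 01
CHID='decafbad-0000-4000-8000-0123456789ab'
TOPIC='updates'
SORT_KEY="${VERSION}:${CHID}:${TOPIC}"
echo "$SORT_KEY"   # 01:decafbad-0000-4000-8000-0123456789ab:updates
```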
Storage Tables
Autopush uses Google Cloud Bigtable as a key / value data storage system.
DynamoDB (legacy)
Previously, for DynamoDB, Autopush used a single router and messages table. On startup, Autopush created these tables. For more information on DynamoDB tables, see http://docs.aws.amazon.com/amazondynamodb/latest/gettingstartedguide/Welcome.html
Google Bigtable
For Bigtable, Autopush presumes
that the table autopush
has already been allocated, and that the following Cell Families
have been created:
- message, with a garbage collection policy set to a max age of 1 second
- router, with a garbage collection policy set to a max of 1 version
- message_topic, with a garbage collection policy set to a max of 1 version or a max age of 1 second
The following bash script may be a useful example. It presumes that the google-cloud-sdk has already been installed and initialized.
PROJECT=test &&\
INSTANCE=test &&\
DATABASE=autopush &&\
MESSAGE=message &&\
TOPIC=message_topic &&\
ROUTER=router &&\
cbt -project $PROJECT -instance $INSTANCE createtable $DATABASE && \
cbt -project $PROJECT -instance $INSTANCE createfamily $DATABASE $MESSAGE && \
cbt -project $PROJECT -instance $INSTANCE createfamily $DATABASE $TOPIC && \
cbt -project $PROJECT -instance $INSTANCE createfamily $DATABASE $ROUTER && \
cbt -project $PROJECT -instance $INSTANCE setgcpolicy $DATABASE $MESSAGE maxage=1s && \
cbt -project $PROJECT -instance $INSTANCE setgcpolicy $DATABASE $TOPIC maxversions=1 or maxage=1s && \
cbt -project $PROJECT -instance $INSTANCE setgcpolicy $DATABASE $ROUTER maxversions=1
Please note, this document will refer to the message
table and the router
table for
legacy reasons. Please consider these to be the same as the message
and router
cell
families.
Router Table Schema
The router table contains info about how to send out the incoming message.
DynamoDB (legacy)
The router table stored metadata for a given UAID
as well as which
month table should be used for clients with a router_type
of
webpush
.
For "Bridging", additional bridge-specific data may be stored in the
router record for a UAID
.
| Field | Description |
|---|---|
| uaid | partition key - UAID |
| router_type | Router Type (see `autoendpoint::extractors::routers::RouterType`) |
| node_id | Hostname of the connection node the client is connected to. |
| connected_at | Precise time (in milliseconds) the client connected to the node. |
| last_connect | global secondary index - year-month-hour that the client last connected |
| curmonth | Message table name to use for storing WebPush messages. |
Autopush DynamoDB used an optimistic deletion policy for node_id to avoid delete calls when not needed. During a delivery attempt, the endpoint would check the node_id for the corresponding UAID. If it discovered that the node_id on record did not have the client connected, it would clear the node_id record for that UAID in the router table.
The last_connect was a global secondary index that allowed maintenance scripts to locate and purge stale client records and messages.
Clients with a router_type of webpush drained stored messages from the message table named by curmonth after completing their initial handshake. If the curmonth entry was not the current month, it was updated after stored message retrieval so that new messages would be stored in the latest message table.
Bigtable
The Router
table is identified by entries with just the UAID
, containing cells
that are of the router
family. These values are similar to the ones listed above.
| Field | Description |
|---|---|
| Key | UAID |
| router_type | Router Type (see `autoendpoint::extractors::routers::RouterType`) |
| node_id | Hostname of the connection node the client is connected to. |
| connected_at | Precise time (in milliseconds) the client connected to the node. |
| last_connect | year-month-hour that the client last connected |
Message Table Schema
The message table stores messages for users while they're offline or unable to get immediate message delivery.
DynamoDB (legacy)
| Field | Description |
|---|---|
| uaid | partition key - UAID |
| chidmessageid | sort key - CHID + Message-ID |
| chids | Set of CHID that are valid for a given user. This entry was only present in the item when chidmessageid was a space. |
| data | Payload of the message, provided in the Notification body. |
| headers | HTTP headers for the Notification. |
| ttl | Time-To-Live for the Notification. |
| timestamp | Time (in seconds) that the message was saved. |
| updateid | UUID generated when the message was stored, to track whether the message was updated between a client reading it and attempting to delete it. |
The subscribed channels were stored as chids
in a record stored with a
blank space set for chidmessageid
. Before storing or delivering a
Notification
a lookup was done against these chids
.
Bigtable
| Field | Description |
|---|---|
| Key | UAID#CHID#Message-ID |
| data | Payload of the message, provided in the Notification body. |
| headers | HTTP headers for the Notification. |
| ttl | Time-To-Live for the Notification. |
| timestamp | Time (in seconds) that the message was saved. |
| updateid | UUID generated when the message is stored, to track whether the message is updated between a client reading it and attempting to delete it. |
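Assuming the `#` separator implied by the key row above, the row key can be sketched as a simple string concatenation. All three identifiers here are invented for illustration.

```shell
# Sketch: composing a Bigtable message row key (UAID#CHID#Message-ID).
# The identifiers below are illustrative examples, not real values.
UAID='aaaabbbb-cccc-4ddd-8eee-ffff00001111'
CHID='11112222-3333-4444-5555-666677778888'
MESSAGE_ID='01:example-message-id'
ROW_KEY="${UAID}#${CHID}#${MESSAGE_ID}"
echo "$ROW_KEY"
```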
Autopush used a table rotation system, which is now legacy. You may see some references to this as we continue to remove it.
Push Characteristics
- When the Push server has sent a client a notification, no further notifications will be accepted for delivery (except in one edge case). In this state, the Push server will reply to the Endpoint with a 503 to indicate it cannot currently deliver the notification. Once the Push server has received ACKs for all sent notifications, new notifications can flow again, and a check of storage will be done if the Push server had to reply with a 503. The Endpoint will put the Notification in storage in this case.
- (Edge Case) Multiple notifications can be sent at once, if a notification comes in during a Storage check, but before it has completed.
- If a connected client is able to accept a notification, then the Endpoint will deliver the message to the client completely bypassing Storage. This Notification will be referred to as a Direct Notification vs. a Stored Notification.
- (DynamoDB (legacy)) Provisioned Write Throughput for the Router table determines how many connections per second can be accepted across the entire cluster.
- (DynamoDB (legacy)) Provisioned Read Throughput for the Router table and Provisioned Write Throughput for the Storage table determine the maximum possible notifications per second that can be handled. In theory, notification throughput can be higher than the Provisioned Write Throughput on the Storage table, as connected clients will frequently not require using Storage at all. Reads from the Router table are still needed for every notification, whether Storage is hit or not.
- (DynamoDB (legacy)) Provisioned Read Throughput for the Storage table is an important factor in maximum notification throughput, as many slow clients may require frequent Storage checks.
- If a client is reconnecting, their Router record will be old. Router records have the node_id cleared optimistically by Endpoints when the Endpoint discovers it cannot deliver the notification to the Push node on file. If the conditional delete fails, it implies that the client has managed to connect somewhere again during this period. It's entirely possible that the client has reconnected and checked storage before the Endpoint stored the Notification; as a result, the Endpoint must read the Router table again and attempt to tell the node_id for that client to check storage. Further action isn't required, since any more reconnects in this period will have seen the stored notification.
Push Endpoint Length
The Endpoint URL may seem excessively long. This is because the URL consists of the unique User Agent Identifier (UAID) and the Subscription Channel Identifier (CHID). Both of these are version 4 Universally Unique Identifiers (UUIDs), meaning that an endpoint contains 256 bits of entropy (2 * 128 bits). When used in string format, these UUIDs are always in lowercase, dashed format (e.g. 01234567-0123-abcd-0123-0123456789ab).
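The UUID format above can be checked mechanically. The following sketch validates the example identifier from the text with a regular expression; the UUID is only the illustrative value shown above.

```shell
# Sketch: verifying that an identifier is in the lowercase, dashed UUID
# format used in endpoint URLs. The UUID is the example from the text.
UUID='01234567-0123-abcd-0123-0123456789ab'
echo "$UUID" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$' \
  && echo 'well-formed'
```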
Unfortunately, since the endpoint contains an identifier that can be easily traced back to a specific device, and therefore a specific user, there is the risk that a user might inadvertently disclose personal information via their metadata. To prevent this, the server obscures the UAID and CHID pair to prevent casual determination.
As an example, it is possible for a user to get a Push endpoint for two different accounts from the same User Agent. If the UAID were disclosed, then a site may be able to associate a single user to both of those accounts. In addition, there are reasons that storing the UAID and CHID in the URL makes operating the server more efficient.
Naturally, we're always looking at ways to improve and reduce the length of the URL. This is why it's important to store the entire length of the endpoint URL, rather than try and optimize in some manner.
DynamoDB Message Table Rotation (legacy)
Note: this section does not apply to our BigTable database backend. All documentation below is deprecated and left for historical purposes.
As of version 1.45.0, message table rotation can be disabled. This is
because DynamoDB now provides automatic entry expiration. This is
controlled in our data by the "expiry" field. (Note, field
expiration is only available in full DynamoDB, and is not replicated
with the mock DynamoDB API provided for development.) The following
feature is disabled with the no_table_rotation
flag set in the
autopush_shared.ini
configuration file.
If table rotation is disabled, the last message table used will become 'frozen' and will be used for all future messages. While this may not be aesthetically pleasing, it's more efficient than copying data to a new, generic table. If it's preferred, service can be shut down, previous tables dropped, the current table renamed, and service brought up again.
Message Table Rotation information (legacy)
Note: this section does not apply to our BigTable database backend. All documentation below is deprecated and left for historical purposes.
To avoid costly table scans, autopush used rotating message and router tables. Clients that hadn't connected in 30-60 days would have their router and message table entries dropped and would need to re-register. Tables were suffixed with the year/month they were meant for, e.g. messages_2015_02. Tables must have been created, with their read/write units properly allocated, by a separate process in advance of the month switch-over, as autopush nodes assumed the tables already existed. Scripts were provided (https://github.com/mozilla-services/autopush/blob/master/maintenance.py) that could be run weekly to ensure all necessary tables were present and that tables old enough were dropped.
Within a few days of the new month, the load on the prior month's table would fall as clients transitioned to the new table. The read/write units for the prior month's table could then be lowered.
DynamoDB Rotating Message Table Interaction Rules (legacy)
Due to the complexity of having notifications spread across two tables, several rules were used to avoid losing messages during the month transition.
The logic for connection nodes is more complex, since only the connection node knows when the client connects, and how many messages it has read through.
When table rotation was allowed, the router table used the curmonth
field to indicate the last month the client had read notifications
through. This was independent of the last_connect since it was possible
for a client to connect, fail to read its notifications, then reconnect.
This field was updated for a new month when the client connected after
it had ack'd all the notifications out of the last month.
To avoid issues with time synchronization, the node the client is connected to acts as the source of truth for when the month has flipped over. Clients are only moved to the new table on connect, and only after reading/acking all the notifications for the prior month.
Rules for Endpoints
1. Check the router table to see the current_month the client is on.
2. Read the chan list entry from the appropriate month's message table to see if it's a valid channel. If it's valid, move to step 3.
3. Store the notification in the current month's table if valid. (Note that this step does not copy the blank entry of valid channels.)
Rules for Connection Nodes
After Identification:
1. Check to see if the current_month matches the current month. If it does, proceed normally using the current month's message table. If the connection node's month does not match the stored current_month in the client's router table entry, proceed to step 2.
2. Read notifications from the prior month and send them to the client. Once all ACKs are received for all the notifications for that month, proceed to step 3.
3. Copy the blank message entry of valid channels to the new month's message table.
4. Update the router table for the current_month.
During switchover, new commands from the client are accepted only after the router table update.
Handling of Edge Cases:
- The connection node receives more notifications during step 3, enough to buffer, such that the endpoint starts storing them in the previous current_month. In this case the connection node will check the old table, then the new table, to ensure it doesn't lose messages during the switch.
- The connection node dies, or the client disconnects, during step 3/4. This is not a problem, as the reconnect will pick up at the right spot.
Installing
System Requirements
Autopush requires the following to be installed. Since each system has different methods and package names, it's best to search for each package.
- Rust 1.68 (or later)
- build-essential (a meta package that includes):
  - autoconf
  - automake
  - gcc
  - make
- (for integration testing) python3 and the python3 development header files
- libffi development headers
- openssl development headers
- python3 virtualenv
- git
For instance, if installing on a Fedora or RHEL-like Linux (e.g. an Amazon EC2 instance):
$ sudo yum install autoconf automake gcc make libffi-devel \
openssl-devel python3-devel python3-virtualenv git -y
Or a Debian based system (like Ubuntu):
$ sudo apt-get install build-essential libffi-dev \
libssl-dev python3-dev python3-virtualenv git --assume-yes
Check-out the Autopush Repository
You should now be able to check-out the autopush repository.
$ git clone https://github.com/mozilla-services/autopush-rs.git
Alternatively, if you're planning on submitting a patch/pull-request to autopush then fork the repo and follow the Github Workflow documented in Mozilla Push Service - Code Development.
Rust and Cargo
You can install Rust and Cargo (if not already present on your computer) by following the steps at rustup.rs, or by installing Rust from your system's package manager. Please note that we currently require a minimum of Rust 1.68.
You can find what version of rust you are running using
rustc --version
You can update to the latest version of rust by using
rustup update
You can build all applications by running
cargo build
Scripts
After installation of autopush the following command line utilities are
available in the virtualenv bin/
directory:
| Command | Description |
|---|---|
| autopush | Runs a Connection Node |
| autoendpoint | Runs an Endpoint Node |
| endpoint_diagnostic | Runs Endpoint diagnostics |
| autokey | Endpoint encryption key generator |
If you are planning on using Google Cloud Bigtable, you will need to configure your GOOGLE_APPLICATION_CREDENTIALS. See How Application Default Credentials works.
Building Documentation
To build the documentation, you will need additional packages installed:
cargo install mdbook
You can then build the documentation:
cd docs
make html
Local Storage emulation
Local storage can be useful for development and testing. It is not advised to use emulated storage for any form of production environment, as there are strong restrictions on the emulators as well as no guarantee of data resilience.
Specifying storage is done via two main environment variables / configuration settings.
db_dsn
This specifies the URL to the storage system to use. See following sections for details.
db_settings
This is a serialized JSON dictionary containing the storage specific settings.
Using Google Bigtable Emulator locally
Google supplies a Bigtable emulator as part of their free SDK. Install the Cloud CLI, per their instructions, and then start the Bigtable emulator by running
gcloud beta emulators bigtable start
By default, the emulator is started on port 8086. When using the emulator, you will need to set an environment variable that contains the address to use.
export BIGTABLE_EMULATOR_HOST=localhost:8086
The Bigtable emulator is memory-only and does not maintain information between restarts. This means that you will need to create the table, column families, and policies each time you restart it.
You can initialize these via the setup_bt.sh
script which uses the cbt
command from the SDK:
scripts/setup_bt.sh
The db_dsn to access this data store with autoendpoint would be:
grpc://localhost:8086
The db_settings contains a JSON dictionary indicating the names of the message and router families, as well as the path to the table name.
For example, if we were to use the values from the initialization script above (remember to escape these values for whatever system you are using):
{"message_family":"message","message_topic_family":"message_topic","router_family":"router","table_name":"projects/test/instances/test/tables/autopush"}
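Putting the two settings together, they might be supplied as environment variables like the sketch below. The AUTOEND__ prefix mirrors the AUTOEND__FCM__CREDENTIALS variable mentioned elsewhere in this document and is an assumption here; confirm the exact variable names for your deployment.

```shell
# Sketch: supplying db_dsn and db_settings via environment variables.
# The AUTOEND__ variable names are assumptions; verify them for your setup.
export AUTOEND__DB_DSN='grpc://localhost:8086'
export AUTOEND__DB_SETTINGS='{"message_family":"message","message_topic_family":"message_topic","router_family":"router","table_name":"projects/test/instances/test/tables/autopush"}'
echo "$AUTOEND__DB_SETTINGS"
```

Single quotes keep the embedded double quotes intact without extra escaping.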
Using the "Dual" storage configuration (legacy)
Dual is a temporary system to be used to transition user data from one system to another. The "primary" system is read/write, while the "secondary" is read only, and is only read when a value is not found in the "primary" storage.
Dual's DSN is dual. All connection information is stored in the db_settings parameter. (Remember to escape these values for whatever system you are using.)
{"primary":{"db_settings":"{\"message_family\":\"message\",\"router_family\":\"router\",\"table_name\":\"projects/test/instances/test/tables/autopush\"}","dsn":"grpc://localhost:8086"},"secondary":{"db_settings":"{\"message_table\":\"test_message\",\"router_table\":\"test_router\"}","dsn":"http://localhost:8000/"}}
Configuring for Third Party Bridge services
Working with mobile devices can present many challenges. One very significant one deals with how mobile devices save battery very aggressively. Using your mobile device's CPU and radio both require considerable battery power. This means that maintaining something like a constant connection to a remote server, or regularly "pinging" a server, can cause your device to wake, spin up the CPU, and use the radio to connect to local wifi or cellular networks. This may cause your application to be quickly flagged by the operating system and either aggressively deactivated or flagged for removal.
Fortunately, the major mobile OS providers offer a way to send messages to devices on their networks. These systems operate similarly to the way Push works, but have their own special considerations. In addition, we want to make sure that messages remain encrypted while passing through these systems. The benefit of using these sorts of systems is that message delivery is effectively "free", and apps that use these systems are not flagged for removal.
Setting up the client portion of these systems is outside the scope of this document, however the providers of these networks have great documentation that can help get you started.
As a bit of shorthand, we refer to these proprietary mobile messaging systems as "bridge" systems, since they act as a metaphorical bridge between our servers and our applications.
How we connect and use these systems is described in the following documents:
Configuring for the APNS bridge
APNS requires a current Apple Developer License for the platform or
platforms you wish to bridge to (e.g. iOS, desktop, etc.). Once that
license has been acquired, you will need to create and export a valid
.p12
type key file. For this document, we
will concentrate on creating an iOS certificate.
Create the App ID
First, you will need an Application ID. If you do not already have an application, you will need to create an application ID. For an App ID to use Push Notifications, it must be created as an Explicit App ID. Please be sure that under "App Services" you select Push Notifications. Once these values are set, click on [Continue].
Confirm that the app settings are as you desire and click [Register], or click [Back] and correct them. Push Notifications should appear as "Configurable".
Create the Certificate
Then Create a new certificate. Select "Apple Push Notification service SSL" for either Development or Production, depending on intended usage of the certificate. "Development", in this case, means a certificate that will not be used by an application released for general public use, but instead only for personal or team development. This is also known as a "Sandbox" application and will require setting the "use_sandbox" flag. Once the preferred option is selected, click [Continue].
Select the App ID that matches the Application that will use Push Notifications. Several Application IDs may be present, be sure to match the correct App ID. This will be the App ID which will act as the recipient bridge for Push Notifications. Select [Continue].
Follow the on-screen instructions to generate a CSR file, click [Continue], and upload the CSR.
Download the newly created iOSTeam_Provisioning_Profile_.mobileprovision keyset, and import it into your KeyChain Access app.
Exporting the .p12 key set
In KeyChain Access, for the login keychain, in the
Certificates category, you should find an Apple Push Services:
*your AppID* certificate. Right click on this certificate and select
Export "Apple Push Services:".... Provide the file with a reasonably
unique name, such as Push_Production_APNS_Keys.p12
, so that you can
find it easily later. You may wish to secure these keys with a password.
Converting .p12 to PEM
You will need to convert the .p12 file to PEM format. openssl can perform these steps for you. A simple script you could use might be:
#!/bin/bash
echo "Converting $1 to PEM"
openssl pkcs12 -in "$1" -out "$1_cert.pem" -clcerts -nokeys
openssl pkcs12 -in "$1" -out "$1_key.pem" -nocerts -nodes
This will divide the p12 key into two components that can be read by the autopush application.
Sending the APNS message
The APNS post message contains JSON formatted data similar to the following:
{
"aps": {
"content-available": 1
},
"key": "value",
...
}
aps is reserved as a sub-dictionary. All other key: value slots are open.
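The payload above can be assembled in a script with a heredoc, as in this sketch. The "key": "value" pair is a placeholder for application-specific data.

```shell
# Sketch: assembling the APNS payload shown above with a heredoc.
# "key": "value" stands in for your own application data.
PAYLOAD=$(cat <<'EOF'
{
  "aps": {
    "content-available": 1
  },
  "key": "value"
}
EOF
)
echo "$PAYLOAD"
```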
In addition, you must specify the following headers:
- apns-id: A lowercase, dash-formatted UUID for this message.
- apns-priority: Either 10 for immediate delivery or 5 for delayable delivery.
- apns-topic: The bundle ID for the recipient application. This must match the bundle ID of the AppID used to create the "Apple Push Services:..." certificate. It usually has the format of com.example.ApplicationName.
- apns-expiration: The timestamp for when this message should expire, in UTC-based seconds. A zero ("0") means immediate expiration.
Handling APNS responses
APNS returns a status code and an optional JSON block describing the error. A list of these responses is provided in the APNS documentation.
Note that Apple may change the document location without warning; you may be able to search using DeviceTokenNotForTopic or similar error messages.
Configuring for Google GCM/FCM
Google's Firebase Cloud Messaging (FCM) superseded Google Cloud Messaging (GCM). The server setup process is well documented, with autopush using the FCM HTTP v1 API protocol.
Authorization
FCM requires a server authentication key. These keys are specified in the autoendpoint configuration as the environment variable AUTOEND__FCM__CREDENTIALS (or configuration file option [fcm] server_credentials) as a serialized JSON structure containing the bridge project ID and either the contents of the credential file generated for the Service Account key, or a path to the file containing the credentials.
As an example, let's assume we create a Push recipient application with a Google Cloud Project ID of random-example-123. Since our clients could be using various alternative bridges (for testing, stage, etc.), we would use an alternate identifier for the instance_id.
If we saved the sample credentials we received from Google to ./keys/credentials.json, it might look like:
{
"type": "service_account",
"project_id":"random-example-123",
"private_key_id": "abc...890",
"private_key": "---... ---",
"client_email": "...",
"client_id": "...",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/...",
"client_x509_cert_url": "..."
}
Autoendpoint Configuration
If we wished to "in-line" the credentials for an instance_id of default
, our environment variable would look like:
AUTOEND__FCM__CREDENTIALS='{"default":{"project_id":"random-example-123","credential":"{\"type\": \"service_account\",\"project_id\":\"random-example-123\",\"private_key_id\": \"abc..890\",\"private_key\": \"---...---\",\"client_email\": \"...\",\"client_id\": \"...\",\"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\"token_uri\": \"https://oauth2.googleapis.com/token\",\"auth_provider_x509_cert_url\": \"https://www.googleapis.com/...\",\"client_x509_cert_url\":\"...\"}"}}'
We could also just point to the relative path of the file using:
AUTOEND__FCM__CREDENTIALS='{"default":{"project_id":"random-example-123","credential":"keys/credentials.json"}}'
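If you prefer the in-line form, the escaping can be done mechanically rather than by hand. This sketch escapes backslashes first and then double quotes; the file content is a truncated stand-in for real Google credentials.

```shell
# Sketch: producing the escaped in-line "credential" value from a
# credentials file with sed. The file content is a truncated example.
printf '%s' '{"type":"service_account","project_id":"random-example-123"}' \
  > /tmp/credentials.json
ESCAPED=$(sed -e 's/\\/\\\\/g' -e 's/"/\\"/g' /tmp/credentials.json)
echo "$ESCAPED"
```

The escaped string can then be pasted into the "credential" slot of the outer JSON structure.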
Only autoendpoint
uses the bridge interface, so you do not need to specify this configuration for autoconnect
.
Running Autopush
Overview
To run Autopush, you will need to run at least one connection node, one endpoint node, and a local storage system. The prior section on Autopush architecture documented these components and their relation to each other.
The recommended way to run the latest development or tagged Autopush release is to use docker. Autopush has docker images built automatically for every tagged release and when code is merged to master.
If you want to run the latest Autopush code from source then you should
follow the developing
instructions.
The instructions below assume that you want to run Autopush with a local Bigtable emulator for testing or local verification. The docker containers can be run on separate hosts as well.
Setup
#TODO rebuild the docker-compose.yaml files based off of syncstorage ones.
- rebuild docker-compose.yaml
- initialize tables
- [ ] define steps here
Generate a Crypto-Key
As the cryptography section notes, you will need a `CRYPTO_KEY` to run both of the Autopush daemons. To generate one with the docker image:
$ docker run -t -i mozilla-services/autopush-rs autokey
CRYPTO_KEY="hkclU1V37Dnp-0DMF9HLe_40Nnr8kDTYVbo2yxuylzk="
Store the key for later use (including any trailing `=`).
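If you cannot run the docker image, an equivalent key can be produced with Python's standard library. This is a hedged sketch: it assumes, based on the sample output above, that `CRYPTO_KEY` is a Fernet-style key, i.e. 32 random bytes in URL-safe base64 with padding.

```python
import base64
import os

# 32 random bytes, URL-safe base64 encoded. The trailing '=' padding
# is part of the key and must be kept, as the docs note above.
key = base64.urlsafe_b64encode(os.urandom(32)).decode()
print(f'CRYPTO_KEY="{key}"')
```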
Start Autopush
Once you've completed the setup and have a crypto key, you can run a local Autopush with a single command:
$ CRYPTO_KEY="hkclU1V37Dnp-0DMF9HLe_40Nnr8kDTYVbo2yxuylzk=" docker-compose up
docker-compose will start up three containers: one for each of the two Autopush daemons, and a third for storage.
By default, the following services will be exposed:
- `ws://localhost:8080/` - websocket server
- `http://localhost:8082/` - HTTP Endpoint Server (See the HTTP API)
You can set the `CRYPTO_KEY` as an environment variable if you are using Docker. If you are running these programs stand-alone or outside of docker-compose, you may set up a more thorough configuration using config files as documented below.
Note: The load-tester can be run against it, or you can run Firefox with the local Autopush per the test-with-firefox docs.
Configuration
Autopush can be configured in three ways: by option flags, by environment variables, and by configuration files. Autopush uses three configuration files. These files use standard `ini` formatting, similar to the following:
# A comment description
;a_disabled_option
;another_disabled_option=default_value
option=value
Options can either have values or act as boolean flags. If the option is a flag it is either True if enabled, or False if disabled. The configuration files are usually richly commented, and you're encouraged to read them to learn how to set up your installation of autopush.
Note: any line that does not begin with a `#` or `;` is considered an option line. If an unexpected option is present in a configuration file, the application will fail to start.
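The format above reads naturally with Python's `configparser`, which illustrates the comment and flag conventions. This is a demonstration only: a `[shared]` section header is added because `configparser` requires one, and autopush's own parser may behave differently.

```python
import configparser

# ';' and '#' lines are comments; allow_no_value=True permits bare
# boolean-flag options (present = enabled).
SAMPLE = """\
[shared]
# A comment description
;a_disabled_option
;another_disabled_option=default_value
option=value
enabled_flag
"""

parser = configparser.ConfigParser(allow_no_value=True)
parser.read_string(SAMPLE)
```

Here `option` resolves to `"value"`, `enabled_flag` is present with no value (acting as a True flag), and the `;`-prefixed options remain disabled.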
Configuration files can be located in:
- the /etc/ directory
- the configs subdirectory
- the $HOME or current directory (prefixed by a period '.')
The three configuration files are:
- autopush_connection.ini - contains options for use by the websocket handler. This file's path can be specified by the `--config-connection` option.
- autopush_shared.ini - contains options shared between the connection and endpoint handler. This file's path can be specified by the `--config-shared` option.
- autopush_endpoint.ini - contains options for the HTTP handlers. This file's path can be specified by the `--config-endpoint` option.
Sample Configurations
Three sample configurations, a base config, and a config for each Autopush daemon can be found at https://github.com/mozilla-services/autopush/tree/master/config
These can be downloaded and modified as desired.
Config Files with Docker
To use a configuration file with docker, ensure the config files are accessible to the user running docker-compose. Then update the `docker-compose.yml` to use the config files and make them available to the appropriate docker containers.
Mounting a config file to be available in a docker container is fairly simple. For instance, to mount a local file `autopush_connection.ini` into a container as `/etc/autopush_connection.ini`, update the `autopush` section of the `docker-compose.yml` to be:
volumes:
- ./boto-compose.cfg:/etc/boto.cfg:ro
- ./autopush_connection.ini:/etc/autopush_connection.ini
Autopush automatically searches for a configuration file at this location so nothing else is needed.
Note: The `docker-compose.yml` file provides a number of overrides as environment variables, such as `CRYPTO_KEY`. If these values are not defined, they are submitted as `""`, which will prevent values from being read from the config files. In the case of `CRYPTO_KEY`, a new, random key is automatically generated, which will result in existing endpoints no longer being valid. It is recommended that for docker based images you *always* supply a `CRYPTO_KEY` as part of the run command.
Notes on GCM/FCM support
Note: GCM is no longer supported by Google. Some legacy users can still use GCM, but it is strongly recommended that applications use FCM.
Autopush is capable of routing messages over Firebase Cloud Messaging for android devices. You will need to set up a valid FCM account. Once you have an account, open the Google Developer Console:
- Create a new project. Record the Project Number as "SENDER_ID". You will need this value for your android application.
- In the `.autopush_endpoint` server config file:
  - add `fcm_enabled` to enable FCM routing.
  - add `fcm_creds`. This is a json block with the following format:
{"APP ID": {"projectid": "PROJECT ID NAME", "auth":"PATH TO PRIVATE KEY FILE"}, ...}
(see Configuring for the Google GCM/FCM for more details)
where:
- app_id: the URL identifier to be used when registering endpoints. (e.g. if "reference_test" is chosen here, registration requests should go to https://updates.push.services.mozilla.com/v1/fcm/reference_test/registration)
- project id name: the name of the Project ID as specified on the https://console.firebase.google.com/ Project Settings > General page.
- path to Private Key File: path to the Private Key file provided by the Settings > Service accounts > Firebase Admin SDK page. NOTE: This is *NOT* the "google-services.json" config file.
Additional notes on using the FCM bridge are available on the wiki.
Coding Style Guide
Autopush uses Rust styling guides based on `cargo fmt` and `cargo clippy`.
Testing Style Guide
Given the integration and load tests are written in Python, we follow a few simple style conventions:
- We conform to the PEP 8 standard Style Guide.
- We use type annotations for all variables, functions, and classes.
- We check linting automatically by running `make lint` from the root directory. Each subsequent check can be run manually. Consult the Makefile for commands.
- We use flake8 as our core style enforcement linter.
- We use black for formatting and isort for import formatting.
- We use mypy for type annotation checking.
- We use pydocstyle for docstring conventions.
- We use bandit for static code security analysis.
Exceptions
Testing
Test Strategy
Autopush is tested using a combination of functional, integration, and performance tests.
Unit tests are written in the same Rust module as the code they are testing. Integration and load test code are in the `tests/` directory, both written in Python.
Presently, the Autopush test strategy does not require a minimum test coverage percentage for unit and integration tests. However, it is the goal that the service eventually have defined minimum coverage. Load test results should not go below a minimum performance threshold.
The functional test strategy is three-tiered, composed of unit, integration, and load tests.
See the documentation in each given test area for specific details on running and maintaining tests.
Unit Tests
Unit tests allow for testing individual components of code in isolation to ensure they function as expected. Rust's built-in support for writing and running unit tests uses the `#[cfg(test)]` and `#[test]` attributes.
Best Practices
- Test functions are regular Rust functions annotated with the `#[test]` attribute.
- Test functions should be written in the same module as the code they are testing.
- Test functions should be named in a manner that describes the behavior being tested. For example:
#[test]
fn test_broadcast_change_tracker()
- The use of assertion macros is encouraged. This includes, but is not limited to: `assert_eq!(actual, expected)`, `assert_ne!(actual, expected)`, `assert!(<condition>)`.
- You should group related tests into modules using the `mod` keyword. Furthermore, test modules can be nested to organize tests in a hierarchy.
Running Unit Tests
Run Rust unit tests with the `cargo test` command from the root of the directory.
To run a specific test, provide the function name to `cargo test`. Ex. `cargo test test_function_name`.
Integration Tests
The autopush-rs tests are written in Python and located in the integration test directory.
Testing Configuration
All dependencies are maintained by Poetry and defined in the `tests/pyproject.toml` file.
There are a few configuration steps required to run the Python integration tests:
- Depending on your operating system, ensure you have `cmake` and `openssl` installed. If using MacOS, for example, you can use `brew install cmake openssl`.
- Build Autopush-rs: from the root directory, execute `cargo build`.
- Set up the local Bigtable emulator. For more information on Bigtable, see the Bigtable emulation docs in this repo.
  - Install the Google Cloud CLI
  - Install and run the Google Bigtable Emulator
  - Configure the Bigtable emulator by running the following shell script (note: this will create a project and instance both named `test`, meaning that the table name will be `projects/test/instances/test/tables/autopush`):
BIGTABLE_EMULATOR_HOST=localhost:8086 \
scripts/setup_bt.sh
- Create a Python virtual environment. It is recommended to use `pyenv virtualenv`:
$ pyenv install 3.12                # install matching version currently used
$ pyenv virtualenv 3.12 push-312    # you can name this whatever you like
$ pyenv local push-312              # sets this venv to activate when entering dir
$ pyenv activate push-312
- Run `poetry install` to install all dependencies for testing.
Running Integration Tests
To run the integration tests, simply run `make integration-tests` from your terminal at the root of the project.
You can alter the verbosity and logging output by adding command line flags to the `PYTEST_ARGS ?=` variable in the root project Makefile. For example, for greater verbosity and stdout printing, add `-vv -s`.
The test output is then emitted in your terminal instance. This includes the name of the tests, whether they pass or fail and any exceptions that are triggered during the test run.
The integration tests make use of pytest markers for filtering tests. These can be used with the `-m` pytest option, or through the following environment variables and the `integration-test` make command.
| ENVIRONMENT VARIABLE | RELATED MARKER | DESCRIPTION |
|---|---|---|
| SKIP_SENTRY | sentry | If set, excludes all tests marked with sentry from execution |
| TEST_STUB | stub | If set, includes all tests marked with stub in execution |
Integration tests in CI will be triggered automatically whenever a commit is pushed to a branch as a part of the CI PR workflow.
Debugging
In some instances after making test changes, the test client can potentially hang in a dangling process. This can result in inaccurate results or tests not running correctly. You can run the following commands to determine the PIDs of the offending processes and terminate them:
$ ps -fA | grep autopush
# any result other than grep operation is dangling
$ kill -s KILL <PID>
Firefox Testing
To test a locally running Autopush with Firefox, you will need to edit several config variables in Firefox.
- Open a New Tab.
- Go to `about:config` in the Location bar and hit Enter; accept the disclaimer if it's shown.
- Search for `dom.push.serverURL` and make a note of the existing value (you can right-click the preference and choose `Reset` to restore the default).
- Double click the entry and change it to `ws://localhost:8080/`.
- Right click in the page and choose `New -> Boolean`, name it `dom.push.testing.allowInsecureServerURL` and set it to `true`.
You should then restart Firefox to begin using your local Autopush.
Debugging
On Android, you can set `dom.push.debug` to enable debug logging of Push via `adb logcat`.
For desktop use, you can set `dom.push.loglevel` to `"debug"`. This will log all push messages to the Browser Console (Tools > Web Developer > Browser Console).
Load Tests - Performance Testing
Performance load tests can be found under the `tests/load` directory. These tests spawn multiple clients that connect to Autopush in order to simulate real-world load on the infrastructure. These tests use the Locust framework and are triggered manually at the discretion of the Autopush Engineering Team.
For more details see the README.md file in the `tests/load` directory.
Release Process
Autopush has a regular 2-3 week release to production, depending on developer and QA availability. The developer creating a release should handle all aspects of the following process, as the steps are done closely in order and time.
Versions
Autopush uses a `{major}.{minor}.{patch}` version scheme. New `{major}` versions are only issued if backwards compatibility is affected. Patch versions are used if a critical bug occurs after production deployment that requires an immediate bug fix.
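The scheme can be sketched as a small helper (hypothetical, not part of the autopush codebase):

```python
def bump(version, kind):
    """Return the next {major}.{minor}.{patch} version.

    kind: "major" for backwards-incompatible releases, "minor" for a
    regular release, "patch" for an urgent post-deployment bug fix.
    """
    major, minor, patch = (int(part) for part in version.split("."))
    if kind == "major":
        return f"{major + 1}.0.0"
    if kind == "minor":
        return f"{major}.{minor + 1}.0"
    if kind == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown release kind: {kind!r}")

# e.g. a regular release after 1.21.0 is 1.22.0
```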
Dev Releases
When changes are committed to the `master` branch, an operations Jenkins instance will build and deploy the code automatically to the dev environment.
The development environment can be verified at its endpoint/wss endpoints:
- Websocket: wss://autopush.dev.mozaws.net/
- Endpoint: https://updates-autopush.dev.mozaws.net/
Stage/Production Releases
Pre-Requisites
To create a release, you will need appropriate access to the autopush GitHub repository with push permission.
You will also need `clog` installed to create the `CHANGELOG.md` update.
Release Steps
In these steps, `{version}` refers to the full version of the release. I.e. if a new minor version is being released after `1.21.0`, the `{version}` would be `1.22.0`.
- Switch to the `master` branch of autopush.
- `git pull` to ensure the local copy is completely up-to-date.
- `git diff origin/master` to ensure there are no local staged or uncommitted changes.
- Run `tox` locally to ensure no artifacts or other local changes that might break tests have been introduced.
- Change to the release branch.
  - If this is a new major/minor release, `git checkout -b release/{major}.{minor}` to create a new release branch.
  - If this is a new patch release, you will first need to ensure you have the minor release branch checked out, then:
    - `git checkout release/{major}.{minor}`
    - `git pull` to ensure the branch is up-to-date.
    - `git merge master` to merge the new changes into the release branch.
  Note that the release branch does not include a `{patch}` component.
- Edit `autopush/__init__.py` so that the version number reflects the desired release version.
- Run `clog --setversion {version}`, and verify changes were properly accounted for in `CHANGELOG.md`.
- `git add CHANGELOG.md autopush/__init__.py` to add the two changes to the new release commit.
- `git commit -m "chore: tag {version}"` to commit the new version and record of changes.
- `git tag -s -m "chore: tag {version}" {version}` to create a signed tag of the current HEAD commit for release.
- `git push --set-upstream origin release/{major}.{minor}` to push the commits to a new origin release branch.
- `git push --tags origin release/{major}.{minor}` to push the tags to the release branch.
- Submit a pull request on github to merge the release branch to master.
- Go to the autopush releases page; you should see the new tag with no release information under it.
- Click the `Draft a new release` button.
- Enter the tag for `Tag version`.
- Copy/paste the changes from `CHANGELOG.md` into the release description, omitting the top 2 lines (the `<a name>` HTML anchor and the version) of the file. Keep these changes handy; you'll need them again shortly.
- Once the release branch pull request is approved and merged, click `Publish Release`.
- File a bug for stage deployment in Bugzilla, in the `Cloud Services` product, under the `Operations: Deployment Requests` component. It should be titled `Please deploy autopush {major}.{minor} to STAGE` and include the changes in the Description, along with any additional instructions to operations regarding deployment changes and special test cases if needed for QA to verify.
At this point, QA will take over, verify stage, and create a production deployment Bugzilla ticket. QA will also schedule production deployment for the release.
HTTP Endpoints for Notifications
Autopush exposes three HTTP endpoints:
/wpush/...
This is tied to the Endpoint Handler (`~autopush.web.webpush.WebPushHandler`). This endpoint is returned by the Push registration process and is used by the AppServer to send Push alerts to the Application. See send.
/m/...
This is tied to `~autopush.web.message.MessageHandler`. This endpoint allows a message that has not yet been delivered to be deleted. See cancel.
/v1/.../.../registration/...
This is tied to the reg_calls Handlers. This endpoint is used by devices that wish to use bridging protocols to register new channels.
NOTE: This is not intended to be used by app developers. Please see the Web Push API on MDN for how to use WebPush. See bridge_api.
Push Service HTTP API
The following section describes how remote servers can send Push Notifications to apps running on remote User Agents.
Lexicon
{UAID}
The Push User Agent Registration ID
Push assigns each remote recipient a unique identifier. {UAID}s are UUIDs in lower case, undashed format (e.g. '01234567abcdabcdabcd01234567abcd'). This value is assigned during Registration.
{CHID}
The Channel Subscription ID
Push assigns a unique identifier for each subscription for a given {UAID}. Like {UAID}s, {CHID}s are UUIDs, but in lower case, dashed format (e.g. '01234567-abcd-abcd-abcd-0123456789ab'). The User Agent usually creates this value and passes it as part of the Channel Subscription. If no value is supplied, the server will create and return one.
{message-id}
The unique Message ID
Push assigns each message for a given Channel Subscription a unique identifier. This value is assigned during Send Notification.
Response
The responses will be JSON formatted objects. In addition, API calls will return valid HTTP error codes (see the errors sub-section for descriptions of specific errors).
For non-success responses, an extended error code object will be returned with the following format:
{
"code": 404, // matches the HTTP status code
"errno": 103, // stable application-level error number
"error": "Not Found", // string representation of the status
"message": "No message found" // optional additional error information
}
See Errors for a list of the errors, causes, and potential resolutions.
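A client can parse this extended error object and decide how to react. The sketch below is illustrative only: the `describe_error` helper and its retry policy are assumptions, not part of the API; errno 201/202 are the documented retryable cases under HTTP 503.

```python
import json

# errno 201/202 (under HTTP 503) are the documented retryable cases;
# treating everything else as permanent is an illustrative policy,
# not part of the API contract.
RETRYABLE_ERRNOS = {201, 202}

def describe_error(body):
    """Summarize an Autopush extended error response body."""
    err = json.loads(body)
    retry = "retryable" if err["errno"] in RETRYABLE_ERRNOS else "permanent"
    return (f'{err["code"]} {err["error"]} '
            f'(errno {err["errno"]}, {retry}): {err.get("message", "")}')

body = '{"code": 404, "errno": 103, "error": "Not Found", "message": "No message found"}'
```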
Calls
Send Notification
Send a notification to the given endpoint identified by its `push_endpoint`. Please note, the Push endpoint URL (which is what is used to send notifications) should be considered "opaque". We reserve the right to change any portion of the Push URL in future provisioned URLs.
The `Topic` HTTP header allows new messages to replace previously sent, unreceived subscription updates. See topic.
Call:
https://updates.push.services.mozilla.com/wpush/v1/...
If the client is using webpush style data delivery, then the body in its entirety will be regarded as the data payload for the message per the WebPush spec.
Note Mozilla reserves the right to change the endpoint at any time. Please do not "optimize" by only storing the last token element of the URI. There will be tears.
Note Some bridged connections require data transcription and may limit the length of data that can be sent. For instance, using a GCM/FCM bridge will require that the data be converted to base64. This means that data may be limited to only 2744 bytes instead of the normal 4096 bytes.
Reply:
{"message-id": {message-id}}
Return Codes:
Note The Push RFC notes the HTTP response codes that should be returned. Autopush cannot support the Push Message Receipt at this time, so Autopush should only return a 201 response. (Previously, Autopush would return a 202 indicating that the message was stored for later retrieval.) Autopush cannot guarantee end-to-end delivery of a message due to the nature of how it handles subscription updates to mobile devices. The "Bridge" protocols do not support this feature, and if possible, Autopush should not disclose the type of UserAgent to the Subscription provider.
- statuscode 404 - Push subscription is invalid.
- statuscode 410 - Push subscription is no longer available.
- statuscode 201 - Message delivered to the node or bridge the client is connected to.
Message Topics
Message topics allow newer message content to replace previously sent, unread messages. This prevents the UA from displaying multiple messages upon reconnect. A blog post provides an example of how to use Topics, but a summary is provided here.
To specify a Topic, include a `Topic` HTTP header along with your send. The topic can be any 32 byte alpha-numeric string (including "_" and "-").
Example topics might be `MailMessages`, `Current_Score`, or `20170814-1400_Meeting_Reminder`.
For example:
curl -X POST \
https://push.services.mozilla.com/wpush/abc123... \
-H "TTL: 86400" \
-H "Topic: new_mail" \
-H "Authorization: Vapid AbCd..." \
...
This would create or replace a message that is valid for the next 24 hours with the topic `new_mail`. The body of this might contain the number of unread messages. If a new message arrives, the Application Server could send a second message with a body containing a revised message count.
Later, when the User reconnects, she will only see a single notification containing the latest notification, with the most recent new mail message count.
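The curl example above can also be expressed programmatically. This sketch only builds the request without sending it; the endpoint URL, body, and VAPID token are placeholders (a real endpoint comes from the Push registration process), and the topic validator reflects the "32 byte alpha-numeric plus `_` and `-`" rule described above.

```python
import re
import urllib.request

# Topics: up to 32 bytes of alphanumerics plus "_" and "-".
TOPIC_RE = re.compile(r"^[A-Za-z0-9_-]{1,32}$")

def topic_request(endpoint, topic, body, ttl, vapid):
    """Build (but do not send) a topic-bearing send request."""
    if not TOPIC_RE.match(topic):
        raise ValueError(f"invalid topic: {topic!r}")
    return urllib.request.Request(
        endpoint,
        data=body,  # encrypted payload bytes in a real send
        method="POST",
        headers={"TTL": str(ttl), "Topic": topic, "Authorization": f"Vapid {vapid}"},
    )

# Placeholder endpoint and token; urllib.request.urlopen(req) would send it.
req = topic_request("https://push.services.mozilla.com/wpush/abc123",
                    "new_mail", b"unread: 3", 86400, "AbCd")
```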
Cancel Notification
Delete the message given the `message_id`.
Call:
https://updates.push.services.mozilla.com/wpush/v1/...
Parameters:
None
Reply:
{}
Return Codes:
See errors.
Push Service Bridge HTTP Interface
Push allows for remote devices to perform some functions using an HTTP interface. This is mostly used by devices that are bridging via an external protocol like GCM/FCM or APNs. All message bodies must be UTF-8 encoded.
API methods requiring Authorization must provide the Authorization header containing the registration secret. The registration secret is returned as "secret" in the registration response.
Lexicon
For the following call definitions:
{type}
The bridge type.
Allowed bridges are `gcm` (Google Cloud Messaging), `fcm` (Firebase Cloud Messaging), and `apns` (Apple Push Notification system).
{app_id}
The bridge specific application identifier
Each bridge may require a unique token that addresses the remote application. For GCM/FCM, this is the SenderID (or 'project number') and is pre-negotiated outside of the push service. You can find this number using the Google developer console. For APNS, this value is the "platform" or "channel" of development (e.g. "firefox", "beta", "gecko", etc.). For our examples, we will use a client token of "33clienttoken33".
{instance_id}
The bridge specific private identifier token
Each bridge requires a unique token that addresses the application on a given user's device. This is the "Registration Token" for GCM/FCM or "Device Token" for APNS. This is usually the product of the application registering the {instance_id} with the native bridge via the user agent. For our examples, we will use an instance ID of "11-instance-id-11".
{secret}
The registration secret from the Registration call.
Most calls to the HTTP interface require an Authorization header. The Authorization header is a simple bearer token, which has been provided by the Registration call and is preceded by the scheme name "Bearer". For our examples, we will use a registration secret of "00secret00".
An example of the Authorization header would be:
Authorization: Bearer 00secret00
{vapidKey}
The VAPID Key provided by the subscribing third party
The VAPID key is optional and provides a way for an application server to voluntarily identify itself.
Please Note: While the VAPID key is optional, if it is included, the VAPID assertion block must contain a `sub` field containing the publishing contact information as a valid URI designator (e.g. `mailto:admin+webpush@example.org` or `https://example.org/contact`). As an example, a minimal VAPID assertion block would contain:
{"aud": "https://updates.push.services.mozilla.com", "exp": 1725468595, "sub": "mailto:admin+webpush@example.com"}
Where `exp` and `sub` reflect the expiration time and publishing contact information. The contact information is used in case of an issue with use of the Push service and is never used for marketing purposes.
When the VAPID key is provided, autopush will return an endpoint that can only be used by the application server that provided the key.
The VAPID key is formatted as a URL-safe Base64 encoded string with no padding.
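The claims block and key formatting can be sketched as follows. The key bytes here are a random placeholder purely to demonstrate the unpadded URL-safe base64 formatting; a real VAPID key is a P-256 public key, and the expiry window chosen is an arbitrary example.

```python
import base64
import os
import time

# A minimal VAPID assertion block; "sub" must be a valid contact URI.
claims = {
    "aud": "https://updates.push.services.mozilla.com",
    "exp": int(time.time()) + 12 * 60 * 60,  # e.g. expire in 12 hours
    "sub": "mailto:admin+webpush@example.com",
}

# VAPID keys are URL-safe base64 with no padding. Placeholder bytes
# here; a real key is an uncompressed P-256 public key (65 bytes).
raw_key = os.urandom(65)
vapid_key = base64.urlsafe_b64encode(raw_key).rstrip(b"=").decode()
```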
Calls
Registration
Request a new UAID registration, Channel ID, and set a bridge type and 3rd party bridge instance ID token for this connection. (See `~autopush.web.registration.NewRegistrationHandler`)
NOTE: This call is designed for devices to register endpoints to be used by bridge protocols. Please see Web Push API for how to use Web Push in your application.
Call:
POST /v1/{type}/{appid}/registration
This call requires no Authorization header.
Parameters:
{"token":{instance_id}, "key": {vapidkey}}
Note
- The VAPID key is optional
- If additional information is required for the bridge, it may be included in the parameters as JSON elements. Currently, no additional information is required.
Reply:
`{"uaid": {UAID}, "secret": {secret},
"endpoint": "https://updates-push...", "channelID": {CHID}}`
example:
POST /v1/fcm/33clienttoken33/registration
{"token": "11-instance-id-11", "key": "AbC12ef0"}
{"uaid": "01234567-0000-1111-2222-0123456789ab",
"secret": "00secret00",
"endpoint": "https://updates-push.services.mozaws.net/push/...",
"channelID": "00000000-0000-1111-2222-0123456789ab"}
Return Codes:
See errors.
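The registration call above can be assembled programmatically. This is an illustrative sketch that only builds the path and body (it does not send anything); the `registration_request` helper is hypothetical, and the example values are the lexicon placeholders used throughout this section.

```python
import json

def registration_request(bridge_type, app_id, instance_id, vapid_key=None):
    """Build the path and JSON body for the bridge registration call.

    POST /v1/{type}/{app_id}/registration needs no Authorization
    header; the "secret" in the reply authorizes subsequent calls.
    """
    path = f"/v1/{bridge_type}/{app_id}/registration"
    body = {"token": instance_id}
    if vapid_key is not None:
        body["key"] = vapid_key  # VAPID key is optional
    return path, json.dumps(body)

path, body = registration_request("fcm", "33clienttoken33", "11-instance-id-11")
```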
Token updates
Update the current bridge token value. Note, this is a *PUT* call, since we are updating existing information. (See `~autopush.web.registration.UaidRegistrationHandler`)
Call:
PUT /v1/{type}/{appid}/registration/{uaid}
Authorization: Bearer {secret}
Parameters:
{"token": {instance_id}}
Note
If additional information is required for the bridge, it may be included in the parameters as JSON elements. Currently, no additional information is required.
Reply:
{}
example:
PUT /v1/fcm/33clienttoken33/registration/abcdef012345
Authorization: Bearer 00secret00
{"token": "22-instance-id-22"}
{}
Return Codes:
See errors.
Channel Subscription
Acquire a new ChannelID for a given UAID. (See `~autopush.web.registration.SubRegistrationHandler`)
Call:
POST /v1/{type}/{app_id}/registration/{uaid}/subscription
Authorization: Bearer {secret}
Parameters:
{"key": {vapidKey}}
Note: VAPID key is optional
Reply:
{"channelID": {CHID}, "endpoint": "https://updates-push..."}
example:
POST /v1/fcm/33clienttoken33/registration/abcdef012345/subscription
Authorization: Bearer 00secret00
{"key": "AbCd01hk"}
{"channelID": "01234567-0000-1111-2222-0123456789ab",
"endpoint": "https://updates-push.services.mozaws.net/push/..."}
Return Codes:
See errors.
Unregister UAID (and all associated ChannelID subscriptions)
Indicate that the UAID, and by extension all associated subscriptions, is no longer valid. (See `~autopush.web.registration.UaidRegistrationHandler`)
Call:
DELETE /v1/{type}/{app_id}/registration/{uaid}
Authorization: Bearer {secret}
Parameters:
{}
Reply:
{}
Return Codes:
See errors.
Unsubscribe Channel
Remove a given ChannelID subscription from a UAID. (See: `~autopush.web.registration.ChannelRegistrationHandler`)
Call:
DELETE /v1/{type}/{app_id}/registration/{uaid}/subscription/{CHID}
Authorization: Bearer {secret}
Parameters:
{}
Reply:
{}
Return Codes:
See errors.
Get Known Channels for a UAID
Fetch the known ChannelIDs for a given bridged endpoint. This is useful to check link status. If no channelIDs are present for a given UAID, an empty set of channelIDs will be returned. (See: `~autopush.web.registration.UaidRegistrationHandler`)
Call:
GET /v1/{type}/{app_id}/registration/{UAID}/
Authorization: Bearer {secret}
Parameters:
{}
Reply:
{"uaid": {UAID}, "channelIDs": [{ChannelID}, ...]}
example:
GET /v1/gcm/33clienttoken33/registration/abcdef012345/
Authorization: Bearer 00secret00
{}
{"uaid": "abcdef012345",
"channelIDS": ["01234567-0000-1111-2222-0123456789ab", "76543210-0000-1111-2222-0123456789ab"]}
Return Codes:
See errors.
Error Codes
Autopush uses error codes based on HTTP response codes. An error response will contain a JSON body including additional error information (see error_resp).
Unless otherwise specified, all calls return one of the following error statuses:
- 20x - Success - The message was accepted for transmission to the client. Please note that the message may still be rejected by the User Agent if there is an error with the message's encryption.
- 301 - Moved + `Location:` if `{client_token}` is invalid (Bridge API Only) - Bridged services (ones that run over third party services like GCM and APNS) may require a new URL be used. Please stop using the old URL immediately and instead use the new URL provided.
- 400 - Bad Parameters - One or more of the parameters specified is invalid. See the following sub-errors indicated by `errno`:
  - errno 101 - Missing necessary crypto keys - One or more required crypto key elements are missing from this transaction. Refer to the appropriate specification for the requested content-type.
  - errno 108 - Router type is invalid - The URL contains an invalid router type, which may be from URL corruption or an unsupported bridge. Refer to bridge_api.
  - errno 110 - Invalid crypto keys specified - One or more of the crypto key elements are invalid. Refer to the appropriate specification for the requested content-type.
  - errno 111 - Missing Required Header - A required crypto element header is missing. Refer to the appropriate specification for the requested content-type.
    - Missing TTL Header - Include the Time To Live header (IETF WebPush protocol §6.2)
    - Missing Crypto Headers - Include the appropriate encryption headers (WebPush Encryption §3.2 and WebPush VAPID §4)
  - errno 112 - Invalid TTL header value - The Time To Live "TTL" header contains an invalid or unreadable value. Please change to a number of seconds that this message should live, between 0 (message should be dropped immediately if user is unavailable) and 2592000 (hold for delivery within the next approximately 30 days).
  - errno 113 - Invalid Topic header value - The Topic header contains an invalid or unreadable value. Please use only ASCII alphanumeric values [A-Za-z0-9] and a maximum length of 32 bytes.
- 401 - Bad Authorization - `Authorization` header is invalid or missing. See the VAPID specification.
  - errno 109 - Invalid authentication
- 404 - Endpoint Not Found - The URL specified is invalid and should not be used again.
  - errno 102 - Invalid URL endpoint
- 410 - Endpoint Not Valid - The URL specified is no longer valid and should no longer be used. A User has become permanently unavailable at this URL.
  - errno 103 - Expired URL endpoint
  - errno 105 - Endpoint became unavailable during request
  - errno 106 - Invalid subscription
- 413 - Payload too large - The body of the message to send is too large. The max data that can be sent is 4028 characters. Please reduce the size of the message.
  - errno 104 - Data payload too large
- 500 - Unknown server error - An internal error occurred within the Push Server.
  - errno 999 - Unknown error
- 502 - Bad Gateway - The Push Service received an invalid response from an upstream Bridge service.
  - errno 900 - Internal Bridge misconfiguration
  - errno 901 - Invalid authentication
  - errno 902 - An error occurred while establishing a connection
  - errno 903 - The request timed out
- 503 - Server temporarily unavailable - The Push Service is currently unavailable. See the error number "errno" value to see if retries are available.
  - errno 201 - Use exponential back-off for retries
  - errno 202 - Immediate retry ok
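The two 503 sub-errors imply different client retry strategies. The sketch below is an illustrative client-side interpretation; the base delay and attempt count are arbitrary assumptions, not values mandated by the service.

```python
def retry_delays(errno, attempts=4, base=0.5):
    """Map a 503 errno to suggested per-attempt delays in seconds.

    errno 201 asks for exponential back-off; errno 202 permits
    immediate retries. Base delay and attempt count are arbitrary
    illustrative choices.
    """
    if errno == 201:
        return [base * (2 ** n) for n in range(attempts)]
    if errno == 202:
        return [0.0] * attempts
    raise ValueError(f"no retry guidance for errno {errno}")
```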
Glossary
AppServer
A third-party Application Server that delivers notifications to client
applications via Push.
Bridging
Using a third party or proprietary network in order to deliver Push
notifications to an App. This may be preferred for mobile devices where
such a network may improve battery life or other reasons.
Channel
A unique route between an AppServer and the Application. May also be referred to as a Subscription.
CHID
The Channel Subscription ID. Push assigns each subscription (or channel)
a unique identifier.
Message-ID
A unique message ID. Each message for a given subscription is given a
unique identifier that is returned to the AppServer
in the Location
header.
Notification
A message sent to an endpoint node intended for delivery to an HTTP
endpoint. Autopush stores these in the message tables.
Router Type
Every UAID
that connects has a router type. This indicates the type of
routing to use when dispatching notifications. For most clients, this
value will be webpush
. Clients using Bridging
will use either
gcm
, fcm
, or apns
.
Subscription
A unique route between an AppServer
and the Application. May also be
referred to as a Channel.
UAID
The Push User Agent Registration ID. Push assigns each remote recipient
(Firefox client) a unique identifier. These may occasionally be reset by
the Push Service or the client.
WebPush
An IETF standard for communication between Push Services, the clients,
and application servers.
See: https://datatracker.ietf.org/doc/draft-ietf-webpush-protocol/
Migrating to Rust
Progress never comes from resting. One of the significant considerations of running a service that needs to communicate with hundreds of millions of clients is cost. We are forced to continually evaluate and optimize. When a lower cost option is presented, we seriously consider it.
There is some risk, of course, so rapid change is avoided and testing is strongly encouraged. As of early 2018, the decision was made to move the costlier elements of the server to Rust. The Rust-based application is at autopush-rs.
Why Rust?
Rust is a strongly typed, memory-efficient language. It has matured rapidly and offers structure that vastly reduces the memory requirements for handling connections. As a bonus, it has also forced us to handle potential error conditions, making the service more reliable.
The current Python environment we use (PyPy) continues to improve as well, but does not offer the sort of improvements that Rust does when it comes to handling socket connections.
To that end, we're continuing to use PyPy for endpoint connection management for the time being.
When is the switch going to happen?
As of the end of June 2018, our Rust handler is in testing. We expect to deploy it soon, but since this deployment should not impact external users, we're not rushing to deploy just to hit an arbitrary milestone. It will be deployed when all parties have determined it's ready.
What will happen to autopush?
Currently, the plan is to maintain autopush as long as it remains in production use, since we expect it to continue handling endpoints for some period even after autopush-rs has been deployed to production and is handling connections. However, we do reserve the right to archive this repo at some future date.