seems the procfile may be wrong

brad stein 2018-10-24 22:19:49 -05:00
parent 0f6049374f
commit ca457e6489
32 changed files with 1500 additions and 4 deletions

View File

@@ -81,7 +81,7 @@ CHANNEL_LAYERS = {
     # },
     # },
     'default': {
-        'BACKEND': 'channels_redis.core.RedisChannelLayer',
+        'BACKEND': 'asgi_redis.core.RedisChannelLayer',
         'CONFIG': {
             "hosts": [os.environ.get('REDIS_URL', 'redis://localhost:6379')],
         }
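For reference, a sketch of the full ``CHANNEL_LAYERS`` block this hunk produces (the surrounding lines are assumed from the stock Channels settings layout; ``asgi_redis.core.RedisChannelLayer`` and ``asgi_redis.RedisChannelLayer`` name the same class, since the package ``__init__`` below re-exports it):

    CHANNEL_LAYERS = {
        'default': {
            'BACKEND': 'asgi_redis.core.RedisChannelLayer',
            'CONFIG': {
                "hosts": [os.environ.get('REDIS_URL', 'redis://localhost:6379')],
            },
        },
    }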

View File: Procfile

@@ -1,2 +1,2 @@
-release: python manage.py migrate
-web: gunicorn ChannelsDrawShareSite.wsgi --log-file -
+web: daphne drawshare.asgi:channel_layer --port $PORT --bind 0.0.0.0 -v2
+worker: python manage.py runworker -v2
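The new ``web`` process points Daphne at ``drawshare.asgi:channel_layer``, so a ``drawshare/asgi.py`` exposing that object must exist. A minimal sketch following the Channels 1.x convention (the file and settings-module names are assumptions, not part of this commit):

    # drawshare/asgi.py (assumed)
    import os

    from channels.asgi import get_channel_layer

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "drawshare.settings")
    channel_layer = get_channel_layer()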

View File: drawshare/consumers.py

@@ -1,5 +1,4 @@
# drawshare/consumers.py
from asgiref.sync import async_to_sync
from channels.generic.websocket import AsyncWebsocketConsumer
import json

View File: asgi_redis-1.4.3.dist-info/DESCRIPTION.rst

@@ -0,0 +1,231 @@
asgi_redis
==========
.. image:: https://api.travis-ci.org/django/asgi_redis.svg
:target: https://travis-ci.org/django/asgi_redis
.. image:: https://img.shields.io/pypi/v/asgi_redis.svg
:target: https://pypi.python.org/pypi/asgi_redis
An ASGI channel layer that uses Redis as its backing store, supporting
both single-server and sharded configurations, as well as groups.
Usage
-----
You'll need to instantiate the channel layer with at least ``hosts``,
and other options if you need them.

Example:

.. code-block:: python

    from asgi_redis import RedisChannelLayer

    channel_layer = RedisChannelLayer(
        hosts=["redis://redis:6379/4"],
        channel_capacity={
            "http.request": 200,
            "http.response*": 10,
        }
    )
``hosts``
~~~~~~~~~
The server(s) to connect to, as either URIs or ``(host, port)`` tuples. Defaults to ``[("localhost", 6379)]``. Pass multiple hosts to enable sharding, but note that changing the host list will lose some sharded data.
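URI and tuple entries can be mixed; a brief sketch of the accepted forms (hostnames are illustrative):

.. code-block:: python

    # Single server, URI form (optional password and database number)
    hosts=["redis://:secret@redis-main:6379/0"]

    # Single server, tuple form
    hosts=[("localhost", 6379)]

    # Two servers: messages are sharded across them
    hosts=["redis://redis-1:6379/0", "redis://redis-2:6379/0"]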
``prefix``
~~~~~~~~~~
Prefix to add to all Redis keys. Defaults to ``asgi:``. If you're running
two or more entirely separate channel layers through the same Redis instance,
make sure they have different prefixes. All servers talking to the same layer
should have the same prefix, though.
``expiry``
~~~~~~~~~~
Message expiry in seconds. Defaults to ``60``. You generally shouldn't need
to change this, but you may want to turn it down if you have peaky traffic you
wish to drop, or up if you have peaky traffic you want to backlog until you
get to it.
``group_expiry``
~~~~~~~~~~~~~~~~
Group expiry in seconds. Defaults to ``86400``. Interface servers will drop
connections after this amount of time; it's recommended you reduce it for a
healthier system that encourages disconnections.
``capacity``
~~~~~~~~~~~~
Default channel capacity. Defaults to ``100``. Once a channel is at capacity,
it will refuse more messages. How this affects different parts of the system
varies; an HTTP server will refuse connections, for example, while Django
sending a response will just wait until there's space.
``channel_capacity``
~~~~~~~~~~~~~~~~~~~~
Per-channel capacity configuration. This lets you tweak the channel capacity
based on the channel name, and supports both globbing and regular expressions.
It should be a dict mapping channel name pattern to desired capacity; if the
dict key is a string, it's interpreted as a glob, while if it's a compiled
``re`` object, it's treated as a regular expression.
This example sets ``http.request`` to 200, all ``http.response!`` channels
to 10, and all ``websocket.send!`` channels to 20:
.. code-block:: python

    channel_capacity={
        "http.request": 200,
        "http.response!*": 10,
        re.compile(r"^websocket.send\!.+"): 20,
    }
If you want to enforce a matching order, use an ``OrderedDict`` as the
argument; channels will then be matched in the order the dict provides them.
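For example, a sketch where a narrow pattern must win over a broad one (the patterns are illustrative):

.. code-block:: python

    from collections import OrderedDict

    channel_capacity = OrderedDict([
        ("http.response!*", 10),  # matched first
        ("http.*", 100),          # broader fallback, matched second
    ])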
``symmetric_encryption_keys``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Pass this to enable the optional symmetric encryption mode of the backend. To
use it, make sure you have the ``cryptography`` package installed, or specify
the ``cryptography`` extra when you install ``asgi_redis``::

    pip install asgi_redis[cryptography]
``symmetric_encryption_keys`` should be a list of strings, with each string
being an encryption key. The first key is always used for encryption; all are
considered for decryption, so you can rotate keys without downtime - just add
a new key at the start and move the old one down, then remove the old one
after the message expiry time has passed.
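A sketch of one rotation step, with hypothetical key values:

.. code-block:: python

    # Before: "old-key" both encrypts and decrypts
    symmetric_encryption_keys = ["old-key"]

    # During rotation: "new-key" encrypts, both keys decrypt
    symmetric_encryption_keys = ["new-key", "old-key"]

    # After the expiry window: drop the old key
    symmetric_encryption_keys = ["new-key"]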
Data is encrypted both on the wire and at rest in Redis, though we advise
you also route your Redis connections over TLS for higher security; the Redis
protocol is still unencrypted, and the channel and group key names could
potentially contain metadata patterns of use to attackers.
Keys **should have at least 32 bytes of entropy** - they are passed through
the SHA256 hash function before being used as an encryption key. Any string
will work, but the shorter the string, the easier the encryption is to break.
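One way to produce a key with 32 bytes of entropy, as a sketch using only the standard library (works on Python 2.7 and 3.x alike):

.. code-block:: python

    import binascii
    import os

    # 32 random bytes, hex-encoded; the layer SHA256-hashes it before use
    key = binascii.hexlify(os.urandom(32)).decode("ascii")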
If you're using Django, you may also wish to set this to your site's
``SECRET_KEY`` setting via the ``CHANNEL_LAYERS`` setting:
.. code-block:: python

    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "asgi_redis.RedisChannelLayer",
            "ROUTING": "my_project.routing.channel_routing",
            "CONFIG": {
                "hosts": ["redis://:password@127.0.0.1:6379/0"],
                "symmetric_encryption_keys": [SECRET_KEY],
            },
        },
    }
``connection_kwargs``
~~~~~~~~~~~~~~~~~~~~~
Optional extra arguments to pass to the ``redis-py`` connection class. Options
include ``socket_connect_timeout``, ``socket_timeout``, ``socket_keepalive``,
and ``socket_keepalive_options``. See the
`redis-py documentation <https://redis-py.readthedocs.io/en/latest/>`_ for more.
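A sketch of passing these through ``CONFIG`` (values illustrative); note that ``core.py`` rejects a ``socket_timeout`` below its 5-second BLPOP timeout:

.. code-block:: python

    "CONFIG": {
        "hosts": ["redis://127.0.0.1:6379/0"],
        "connection_kwargs": {
            "socket_connect_timeout": 5,
            "socket_timeout": 10,  # must be at least blpop_timeout (5 seconds)
            "socket_keepalive": True,
        },
    }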
Local-and-Remote Mode
---------------------
A "local and remote" mode is also supported, where the Redis channel layer
works in conjunction with a machine-local channel layer (``asgi_ipc``) in order
to route all normal channels over the local layer, while routing all
single-reader and process-specific channels over the Redis layer.
This allows traffic on things like ``http.request`` and ``websocket.receive``
to stay in the local layer and not go through Redis, while still allowing group
sends, and sends to arbitrary channels terminated on other machines, to work
correctly. It will improve performance and decrease the load on your
Redis cluster, but **it requires that all normal channels be consumed on the
same machine**.
In practice, this means you MUST run workers that consume every channel your
application has code to handle on the same machine as your HTTP or WebSocket
terminator. If you fail to do this, requests to that machine will get routed
into only the local queue and hang as nothing is reading them.
To use it, just use the ``asgi_redis.RedisLocalChannelLayer`` class in your
configuration instead of ``RedisChannelLayer`` and make sure you have the
``asgi_ipc`` package installed; no other change is needed.
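A sketch of the drop-in change in Django settings (project paths are illustrative):

.. code-block:: python

    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "asgi_redis.RedisLocalChannelLayer",
            "ROUTING": "my_project.routing.channel_routing",
            "CONFIG": {
                "hosts": ["redis://127.0.0.1:6379/0"],
            },
        },
    }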
Sentinel Mode
-------------
"Sentinel" mode is also supported, where the Redis channel layer will connect to
a redis sentinel cluster to find the present Redis master before writing or reading
data.
Sentinel mode supports sharding, but does not support multiple Sentinel clusters. To
run sharding of keys across multiple Redis clusters, use a single sentinel cluster,
but have that Sentinel cluster monitor multiple "services". Then, in the configuration
for the ``RedisSentinelChannelLayer``, add a list of the service names. You can also
leave the list of services blank, and the layer will pull all services that are
configured on the sentinel master.
Redis Sentinel mode does not support URL-style connection strings, just tuple-based ones.
Configuration for Sentinel mode looks like this:
.. code-block:: python

    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "asgi_redis.RedisSentinelChannelLayer",
            "CONFIG": {
                "hosts": [("10.0.0.1", 26379), ("10.0.0.2", 26379), ("10.0.0.3", 26379)],
                "services": ["shard1", "shard2", "shard3"],
            },
        },
    }
The "shard1", "shard2", etc entries correspond to the name of the service configured in your
redis `sentinel.conf` file. For example, if your `sentinel.conf` says ``sentinel monitor local 127.0.0.1 6379 1``
then you would want to include "local" as a service in the `RedisSentinelChannelLayer` configuration.
You may also pass a ``sentinel_refresh_interval`` value in the ``CONFIG``, which
will enable caching of the Sentinel results for the specified number of seconds.
This is recommended to reduce the need to query Sentinel every time; even a
low value of 5 seconds will significantly reduce overhead.
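As a sketch:

.. code-block:: python

    "CONFIG": {
        "hosts": [("10.0.0.1", 26379)],
        "services": ["local"],
        "sentinel_refresh_interval": 5,  # cache master lookups for 5 seconds
    }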
Dependencies
------------
Redis >= 2.6 is required for ``asgi_redis``. It supports Python 2.7, 3.4,
3.5, and 3.6.
Contributing
------------
Please refer to the
`main Channels contributing docs <https://github.com/django/channels/blob/master/CONTRIBUTING.rst>`_.
That also contains advice on how to set up the development environment and run the tests.
Maintenance and Security
------------------------
To report security issues, please contact security@djangoproject.com. For GPG
signatures and more security process information, see
https://docs.djangoproject.com/en/dev/internals/security/.
To report bugs or request new features, please open a new GitHub issue.
This repository is part of the Channels project. For the shepherd and maintenance team, please see the
`main Channels readme <https://github.com/django/channels/blob/master/README.rst>`_.

View File: asgi_redis-1.4.3.dist-info/METADATA

@@ -0,0 +1,262 @@
Metadata-Version: 2.0
Name: asgi-redis
Version: 1.4.3
Summary: Redis-backed ASGI channel layer implementation
Home-page: http://github.com/django/asgi_redis/
Author: Django Software Foundation
Author-email: foundation@djangoproject.com
License: BSD
Description-Content-Type: UNKNOWN
Platform: UNKNOWN
Requires-Dist: asgiref (~=1.1.2)
Requires-Dist: msgpack-python
Requires-Dist: redis (~=2.10.6)
Requires-Dist: six
Provides-Extra: cryptography
Requires-Dist: cryptography (>=1.3.0); extra == 'cryptography'
Provides-Extra: tests
Requires-Dist: asgi-ipc; extra == 'tests'
Requires-Dist: channels (>=1.1.0); extra == 'tests'
Requires-Dist: cryptography (>=1.3.0); extra == 'tests'
Requires-Dist: cryptography (>=1.3.0); extra == 'tests'
Requires-Dist: pytest (>=3.0); extra == 'tests'
Requires-Dist: pytest-django (>=3.0); extra == 'tests'
Requires-Dist: requests (>=2.12); extra == 'tests'
Requires-Dist: twisted (>=17.1); extra == 'tests'
Requires-Dist: txredisapi; extra == 'tests'
Requires-Dist: websocket-client (>=0.40); extra == 'tests'
Provides-Extra: twisted
Requires-Dist: twisted (>=17.1); extra == 'twisted'
Requires-Dist: txredisapi; extra == 'twisted'
[Long description omitted here: identical to the DESCRIPTION.rst shown above.]

View File: asgi_redis-1.4.3.dist-info/RECORD

@@ -0,0 +1,17 @@
asgi_redis/__init__.py,sha256=W8hAUAEBfqArsTLUAOeTeuFDt1zzh5DXmQ2z_yJakbA,149
asgi_redis/core.py,sha256=4RZbM6c9pssBJKTvZyQRDuFFh1QtsZUX9U63SQXlpMo,24944
asgi_redis/local.py,sha256=KuD0uHWGDTjzqM1ghx0bMV8vbHeoaKwWCNEefH2OdFI,2996
asgi_redis/sentinel.py,sha256=Y6inFeKcw7OKptDXxlSNKjcKJeW1BYE4oRMlP85t_uY,7175
asgi_redis/twisted_utils.py,sha256=_dkfBylyaWEWTeuJWQnEHkd10fVn0vPLJGmknnNU0jM,583
asgi_redis-1.4.3.dist-info/DESCRIPTION.rst,sha256=Kpc9zMet16qCda9-NFYWEYCD4bMJMTKMYKI88LIGAg8,8761
asgi_redis-1.4.3.dist-info/METADATA,sha256=EF1SwGxig-g5tap26-wU4EHYNWtBuYKn9yXLZRT4TiI,9926
asgi_redis-1.4.3.dist-info/RECORD,,
asgi_redis-1.4.3.dist-info/WHEEL,sha256=o2k-Qa-RMNIJmUdIc7KU6VWR_ErNRbWNlxDIpl7lm34,110
asgi_redis-1.4.3.dist-info/metadata.json,sha256=aEBdEJbRZRZxG5nDwAyuUw-58AYK2_LsUIf4BaIIG6k,1025
asgi_redis-1.4.3.dist-info/top_level.txt,sha256=GWeVhjHDoDp6wrClJMxtUDSvrogkSR9-V6v5IybU5_o,11
asgi_redis-1.4.3.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
asgi_redis/__pycache__/local.cpython-36.pyc,,
asgi_redis/__pycache__/core.cpython-36.pyc,,
asgi_redis/__pycache__/twisted_utils.cpython-36.pyc,,
asgi_redis/__pycache__/__init__.cpython-36.pyc,,
asgi_redis/__pycache__/sentinel.cpython-36.pyc,,

View File: asgi_redis-1.4.3.dist-info/WHEEL

@@ -0,0 +1,6 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.29.0)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any

View File: asgi_redis-1.4.3.dist-info/metadata.json

@@ -0,0 +1 @@
{"description_content_type": "UNKNOWN", "extensions": {"python.details": {"contacts": [{"email": "foundation@djangoproject.com", "name": "Django Software Foundation", "role": "author"}], "document_names": {"description": "DESCRIPTION.rst"}, "project_urls": {"Home": "http://github.com/django/asgi_redis/"}}}, "extras": ["cryptography", "tests", "twisted"], "generator": "bdist_wheel (0.29.0)", "license": "BSD", "metadata_version": "2.0", "name": "asgi-redis", "run_requires": [{"extra": "tests", "requires": ["asgi-ipc", "channels (>=1.1.0)", "cryptography (>=1.3.0)", "cryptography (>=1.3.0)", "pytest (>=3.0)", "pytest-django (>=3.0)", "requests (>=2.12)", "twisted (>=17.1)", "txredisapi", "websocket-client (>=0.40)"]}, {"requires": ["asgiref (~=1.1.2)", "msgpack-python", "redis (~=2.10.6)", "six"]}, {"extra": "cryptography", "requires": ["cryptography (>=1.3.0)"]}, {"extra": "twisted", "requires": ["twisted (>=17.1)", "txredisapi"]}], "summary": "Redis-backed ASGI channel layer implementation", "version": "1.4.3"}

View File: asgi_redis-1.4.3.dist-info/top_level.txt

@@ -0,0 +1 @@
asgi_redis

View File: asgi_redis/__init__.py

@@ -0,0 +1,5 @@
from .core import RedisChannelLayer
from .local import RedisLocalChannelLayer
from .sentinel import RedisSentinelChannelLayer
__version__ = '1.4.3'

View File: asgi_redis/core.py

@@ -0,0 +1,640 @@
from __future__ import unicode_literals
import base64
import binascii
import hashlib
import itertools
import msgpack
import random
import redis
import six
import string
import time
import uuid
try:
import txredisapi
except ImportError:
pass
from asgiref.base_layer import BaseChannelLayer
from .twisted_utils import defer
class UnsupportedRedis(Exception):
pass
class BaseRedisChannelLayer(BaseChannelLayer):
blpop_timeout = 5
global_statistics_expiry = 86400
channel_statistics_expiry = 3600
global_stats_key = '#global#' # needs to be invalid as a channel name
def __init__(
self,
expiry=60,
hosts=None,
prefix="asgi:",
group_expiry=86400,
capacity=100,
channel_capacity=None,
symmetric_encryption_keys=None,
stats_prefix="asgi-meta:",
connection_kwargs=None,
):
super(BaseRedisChannelLayer, self).__init__(
expiry=expiry,
group_expiry=group_expiry,
capacity=capacity,
channel_capacity=channel_capacity,
)
assert isinstance(prefix, six.text_type), "Prefix must be unicode"
socket_timeout = connection_kwargs and connection_kwargs.get("socket_timeout", None)
if socket_timeout and socket_timeout < self.blpop_timeout:
raise ValueError("The socket timeout must be at least %s seconds" % self.blpop_timeout)
self.prefix = prefix
self.stats_prefix = stats_prefix
# Decide on a unique client prefix to use in ! sections
# TODO: ensure uniqueness better, e.g. Redis keys with SETNX
self.client_prefix = "".join(random.choice(string.ascii_letters) for i in range(8))
self._setup_encryption(symmetric_encryption_keys)
### Setup ###
def _setup_encryption(self, symmetric_encryption_keys):
# See if we can do encryption if they asked
if symmetric_encryption_keys:
if isinstance(symmetric_encryption_keys, six.string_types):
raise ValueError("symmetric_encryption_keys must be a list of possible keys")
try:
from cryptography.fernet import MultiFernet
except ImportError:
raise ValueError("Cannot run with encryption without 'cryptography' installed.")
sub_fernets = [self.make_fernet(key) for key in symmetric_encryption_keys]
self.crypter = MultiFernet(sub_fernets)
else:
self.crypter = None
def _register_scripts(self):
connection = self.connection(None)
self.chansend = connection.register_script(self.lua_chansend)
self.lpopmany = connection.register_script(self.lua_lpopmany)
self.delprefix = connection.register_script(self.lua_delprefix)
self.incrstatcounters = connection.register_script(self.lua_incrstatcounters)
### ASGI API ###
extensions = ["groups", "flush", "statistics"]
try:
import txredisapi
except ImportError:
pass
else:
extensions.append("twisted")
def send(self, channel, message):
# Typecheck
assert isinstance(message, dict), "message is not a dict"
assert self.valid_channel_name(channel), "Channel name not valid"
# Make sure the message does not contain reserved keys
assert "__asgi_channel__" not in message
# If it's a process-local channel, strip off local part and stick full name in message
if "!" in channel:
message = dict(message.items())
message['__asgi_channel__'] = channel
channel = self.non_local_name(channel)
# Write out message into expiring key (avoids big items in list)
# TODO: Use extended set, drop support for older redis?
message_key = self.prefix + uuid.uuid4().hex
channel_key = self.prefix + channel
# Pick a connection to the right server - consistent for response
# channels, random for normal channels
if "!" in channel or "?" in channel:
index = self.consistent_hash(channel)
connection = self.connection(index)
else:
index = next(self._send_index_generator)
connection = self.connection(index)
# Use the Lua function to do the set-and-push
try:
self.chansend(
keys=[message_key, channel_key],
args=[self.serialize(message), self.expiry, self.get_capacity(channel)],
client=connection,
)
self._incr_statistics_counter(
stat_name=self.STAT_MESSAGES_COUNT,
channel=channel,
connection=connection,
)
except redis.exceptions.ResponseError as e:
# The Lua script handles capacity checking and sends the "full" error back
if e.args[0] == "full":
self._incr_statistics_counter(
stat_name=self.STAT_CHANNEL_FULL,
channel=channel,
connection=connection,
)
raise self.ChannelFull
elif "unknown command" in e.args[0]:
raise UnsupportedRedis(
"Redis returned an error (%s). Please ensure you're running a "
" version of redis that is supported by asgi_redis." % e.args[0])
else:
# Let any other exception bubble up
raise
def receive(self, channels, block=False):
# List name get
indexes = self._receive_list_names(channels)
# Short circuit if no channels
if indexes is None:
return None, None
# Get a message from one of our channels
while True:
got_expired_content = False
# Try each index:channels pair at least once or until a result is returned
for index, list_names in indexes.items():
# Shuffle list_names to avoid the first ones starving others of workers
random.shuffle(list_names)
# Open a connection
connection = self.connection(index)
# Pop off any waiting message
if block:
result = connection.blpop(list_names, timeout=self.blpop_timeout)
else:
result = self.lpopmany(keys=list_names, client=connection)
if result:
content = connection.get(result[1])
connection.delete(result[1])
if content is None:
# If the content key expired, keep going.
got_expired_content = True
continue
# Return the channel it's from and the message
channel = result[0][len(self.prefix):].decode("utf8")
message = self.deserialize(content)
# If there is a full channel name stored in the message, unpack it.
if "__asgi_channel__" in message:
channel = message['__asgi_channel__']
del message['__asgi_channel__']
return channel, message
# If we only got expired content, try again
if got_expired_content:
continue
else:
return None, None
def _receive_list_names(self, channels):
"""
Inner logic of receive; takes channels, groups by shard, and
returns {connection_index: list_names ...} if a query is needed or
None for a vacuously empty response.
"""
# Short circuit if no channels
if not channels:
return None
# Check channel names are valid
channels = list(channels)
assert all(
self.valid_channel_name(channel, receive=True) for channel in channels
), "One or more channel names invalid"
# Work out what servers to listen on for the given channels
indexes = {}
index = next(self._receive_index_generator)
for channel in channels:
if "!" in channel or "?" in channel:
indexes.setdefault(self.consistent_hash(channel), []).append(
self.prefix + channel,
)
else:
indexes.setdefault(index, []).append(
self.prefix + channel,
)
return indexes
def new_channel(self, pattern):
assert isinstance(pattern, six.text_type)
# Keep making channel names till one isn't present.
while True:
random_string = "".join(random.choice(string.ascii_letters) for i in range(12))
assert pattern.endswith("?")
new_name = pattern + random_string
# Get right connection
index = self.consistent_hash(new_name)
connection = self.connection(index)
# Check to see if it's in the connected Redis.
# This fails to stop collisions for sharding where the channel is
# non-single-listener, but that seems very unlikely.
key = self.prefix + new_name
if not connection.exists(key):
return new_name
### ASGI Group extension ###
def group_add(self, group, channel):
"""
Adds the channel to the named group for at least 'expiry'
seconds (expiry defaults to message expiry if not provided).
"""
assert self.valid_group_name(group), "Group name not valid"
assert self.valid_channel_name(channel), "Channel name not valid"
group_key = self._group_key(group)
connection = self.connection(self.consistent_hash(group))
# Add to group sorted set with creation time as timestamp
connection.zadd(
group_key,
**{channel: time.time()}
)
# Set both expiration to be group_expiry, since everything in
# it at this point is guaranteed to expire before that
connection.expire(group_key, self.group_expiry)
def group_discard(self, group, channel):
"""
Removes the channel from the named group if it is in the group;
does nothing otherwise (does not error)
"""
assert self.valid_group_name(group), "Group name not valid"
assert self.valid_channel_name(channel), "Channel name not valid"
key = self._group_key(group)
self.connection(self.consistent_hash(group)).zrem(
key,
channel,
)
def group_channels(self, group):
"""
Returns all channels in the group as an iterable.
"""
key = self._group_key(group)
connection = self.connection(self.consistent_hash(group))
# Discard old channels based on group_expiry
connection.zremrangebyscore(key, 0, int(time.time()) - self.group_expiry)
# Return current lot
return [x.decode("utf8") for x in connection.zrange(
key,
0,
-1,
)]
def send_group(self, group, message):
"""
Sends a message to the entire group.
"""
assert self.valid_group_name(group), "Group name not valid"
# TODO: More efficient implementation (lua script per shard?)
for channel in self.group_channels(group):
try:
self.send(channel, message)
except self.ChannelFull:
pass
def _group_key(self, group):
return ("%s:group:%s" % (self.prefix, group)).encode("utf8")
### Twisted extension ###
@defer.inlineCallbacks
def receive_twisted(self, channels):
"""
Twisted-native implementation of receive.
"""
# List name get
indexes = self._receive_list_names(channels)
# Short circuit if no channels
if indexes is None:
defer.returnValue((None, None))
# Get a message from one of our channels
while True:
got_expired_content = False
# Try each index:channels pair at least once or until a result is returned
for index, list_names in indexes.items():
# Shuffle list_names to avoid the first ones starving others of workers
random.shuffle(list_names)
# Get a sync connection for conn details
sync_connection = self.connection(index)
twisted_connection = yield txredisapi.ConnectionPool(
host=sync_connection.connection_pool.connection_kwargs['host'],
port=sync_connection.connection_pool.connection_kwargs['port'],
dbid=sync_connection.connection_pool.connection_kwargs['db'],
password=sync_connection.connection_pool.connection_kwargs['password'],
)
try:
# Pop off any waiting message
result = yield twisted_connection.blpop(list_names, timeout=self.blpop_timeout)
if result:
content = yield twisted_connection.get(result[1])
# If the content key expired, keep going.
if content is None:
got_expired_content = True
continue
# Return the channel it's from and the message
channel = result[0][len(self.prefix):]
message = self.deserialize(content)
# If there is a full channel name stored in the message, unpack it.
if "__asgi_channel__" in message:
channel = message['__asgi_channel__']
del message['__asgi_channel__']
defer.returnValue((channel, message))
finally:
yield twisted_connection.disconnect()
# If we only got expired content, try again
if got_expired_content:
continue
else:
defer.returnValue((None, None))
### statistics extension ###
STAT_MESSAGES_COUNT = 'messages_count'
STAT_MESSAGES_PENDING = 'messages_pending'
STAT_MESSAGES_MAX_AGE = 'messages_max_age'
STAT_CHANNEL_FULL = 'channel_full_count'
def _count_global_stats(self, connection_list):
statistics = {
self.STAT_MESSAGES_COUNT: 0,
self.STAT_CHANNEL_FULL: 0,
}
prefix = self.stats_prefix + self.global_stats_key
for connection in connection_list:
messages_count, channel_full_count = connection.mget(
':'.join((prefix, self.STAT_MESSAGES_COUNT)),
':'.join((prefix, self.STAT_CHANNEL_FULL)),
)
statistics[self.STAT_MESSAGES_COUNT] += int(messages_count or 0)
statistics[self.STAT_CHANNEL_FULL] += int(channel_full_count or 0)
return statistics
def _count_channel_stats(self, channel, connections):
statistics = {
self.STAT_MESSAGES_COUNT: 0,
self.STAT_MESSAGES_PENDING: 0,
self.STAT_MESSAGES_MAX_AGE: 0,
self.STAT_CHANNEL_FULL: 0,
}
prefix = self.stats_prefix + channel
channel_key = self.prefix + channel
for connection in connections:
messages_count, channel_full_count = connection.mget(
':'.join((prefix, self.STAT_MESSAGES_COUNT)),
':'.join((prefix, self.STAT_CHANNEL_FULL)),
)
statistics[self.STAT_MESSAGES_COUNT] += int(messages_count or 0)
statistics[self.STAT_CHANNEL_FULL] += int(channel_full_count or 0)
statistics[self.STAT_MESSAGES_PENDING] += connection.llen(channel_key)
oldest_message = connection.lindex(channel_key, 0)
if oldest_message:
messages_age = self.expiry - connection.ttl(oldest_message)
statistics[self.STAT_MESSAGES_MAX_AGE] = max(statistics[self.STAT_MESSAGES_MAX_AGE], messages_age)
return statistics
def _incr_statistics_counter(self, stat_name, channel, connection):
""" helper function to intrement counter stats in one go """
self.incrstatcounters(
keys=[
"{prefix}{channel}:{stat_name}".format(
prefix=self.stats_prefix,
channel=channel,
stat_name=stat_name,
),
"{prefix}{global_key}:{stat_name}".format(
prefix=self.stats_prefix,
global_key=self.global_stats_key,
stat_name=stat_name,
)
],
args=[self.channel_statistics_expiry, self.global_statistics_expiry],
client=connection,
)
### Serialization ###
def serialize(self, message):
"""
Serializes message to a byte string.
"""
value = msgpack.packb(message, use_bin_type=True)
if self.crypter:
value = self.crypter.encrypt(value)
return value
def deserialize(self, message):
"""
Deserializes from a byte string.
"""
if self.crypter:
message = self.crypter.decrypt(message, self.expiry + 10)
return msgpack.unpackb(message, encoding="utf8")
### Redis Lua scripts ###
# Single-command channel send. Returns error if over capacity.
# Keys: message, channel_list
# Args: content, expiry, capacity
lua_chansend = """
if redis.call('llen', KEYS[2]) >= tonumber(ARGV[3]) then
return redis.error_reply("full")
end
redis.call('set', KEYS[1], ARGV[1])
redis.call('expire', KEYS[1], ARGV[2])
redis.call('rpush', KEYS[2], KEYS[1])
redis.call('expire', KEYS[2], ARGV[2] + 1)
"""
# Single-command to increment counter stats.
# Keys: channel_stat, global_stat
# Args: channel_stat_expiry, global_stat_expiry
lua_incrstatcounters = """
redis.call('incr', KEYS[1])
redis.call('expire', KEYS[1], ARGV[1])
redis.call('incr', KEYS[2])
redis.call('expire', KEYS[2], ARGV[2])
"""
lua_lpopmany = """
for keyCount = 1, #KEYS do
local result = redis.call('LPOP', KEYS[keyCount])
if result then
return {KEYS[keyCount], result}
end
end
return {nil, nil}
"""
lua_delprefix = """
local keys = redis.call('keys', ARGV[1])
for i=1,#keys,5000 do
redis.call('del', unpack(keys, i, math.min(i+4999, #keys)))
end
"""
### Internal functions ###
def consistent_hash(self, value):
"""
Maps the value to a node value between 0 and 4095
using CRC, then down to one of the ring nodes.
"""
if isinstance(value, six.text_type):
value = value.encode("utf8")
bigval = binascii.crc32(value) & 0xfff
ring_divisor = 4096 / float(self.ring_size)
return int(bigval / ring_divisor)
def make_fernet(self, key):
"""
Given a single encryption key, returns a Fernet instance using it.
"""
from cryptography.fernet import Fernet
if isinstance(key, six.text_type):
key = key.encode("utf8")
formatted_key = base64.urlsafe_b64encode(hashlib.sha256(key).digest())
return Fernet(formatted_key)
class RedisChannelLayer(BaseRedisChannelLayer):
"""
Redis channel layer.
It routes all messages into remote Redis server. Support for
sharding among different Redis installations and message
encryption are provided. Both synchronous and asynchronous (via
Twisted) approaches are implemented.
"""
def __init__(
self,
expiry=60,
hosts=None,
prefix="asgi:",
group_expiry=86400,
capacity=100,
channel_capacity=None,
symmetric_encryption_keys=None,
stats_prefix="asgi-meta:",
connection_kwargs=None,
):
super(RedisChannelLayer, self).__init__(
expiry=expiry,
hosts=hosts,
prefix=prefix,
group_expiry=group_expiry,
capacity=capacity,
channel_capacity=channel_capacity,
symmetric_encryption_keys=symmetric_encryption_keys,
stats_prefix=stats_prefix,
connection_kwargs=connection_kwargs,
)
self.hosts = self._setup_hosts(hosts)
# Precalculate some values for ring selection
self.ring_size = len(self.hosts)
# Create connections ahead of time (they won't call out just yet, but
# we want to connection-pool them later)
self._connection_list = self._generate_connections(
self.hosts,
redis_kwargs=connection_kwargs or {},
)
self._receive_index_generator = itertools.cycle(range(len(self.hosts)))
self._send_index_generator = itertools.cycle(range(len(self.hosts)))
self._register_scripts()
### Setup ###
def _setup_hosts(self, hosts):
# Make sure they provided some hosts, or provide a default
final_hosts = list()
if not hosts:
hosts = [("localhost", 6379)]
if isinstance(hosts, six.string_types):
# user accidentally used one host string instead of providing a list of hosts
raise ValueError('ASGI Redis hosts must be specified as an iterable list of hosts.')
for entry in hosts:
if isinstance(entry, six.string_types):
final_hosts.append(entry)
else:
final_hosts.append("redis://%s:%d/0" % (entry[0], entry[1]))
return final_hosts
def _generate_connections(self, hosts, redis_kwargs):
return [
redis.Redis.from_url(host, **redis_kwargs)
for host in hosts
]
### Connection handling ####
def connection(self, index):
"""
Returns the correct connection for the current thread.
Pass key to use a server based on consistent hashing of the key value;
pass None to use a random server instead.
"""
# If index is explicitly None, pick a random server
if index is None:
index = self.random_index()
# Catch bad indexes
if not 0 <= index < self.ring_size:
raise ValueError("There are only %s hosts - you asked for %s!" % (self.ring_size, index))
return self._connection_list[index]
def random_index(self):
return random.randint(0, len(self.hosts) - 1)
### Flush extension ###
def flush(self):
"""
Deletes all messages and groups on all shards.
"""
for connection in self._connection_list:
self.delprefix(keys=[], args=[self.prefix + "*"], client=connection)
self.delprefix(keys=[], args=[self.stats_prefix + "*"], client=connection)
### Statistics extension ###
def global_statistics(self):
"""
Returns dictionary of statistics across all channels on all shards.
Return value is a dictionary with following fields:
* messages_count, the number of messages processed since server start
* channel_full_count, the number of times the ChannelFull exception has been raised since server start
This implementation does not provide calculated per-second values.
Due to performance concerns, it does not provide aggregated messages_pending and messages_max_age;
these are only available per channel.
"""
return self._count_global_stats(self._connection_list)
def channel_statistics(self, channel):
"""
Returns dictionary of statistics for specified channel.
Return value is a dictionary with following fields:
* messages_count, the number of messages processed since server start
* messages_pending, the current number of messages waiting
* messages_max_age, how long the oldest message has been waiting, in seconds
* channel_full_count, the number of times the ChannelFull exception has been raised since server start
This implementation does not provide calculated per-second values.
"""
if "!" in channel or "?" in channel:
connections = [self.connection(self.consistent_hash(channel))]
else:
# if we don't know where it is, we have to check in all shards
connections = self._connection_list
return self._count_channel_stats(channel, connections)
def __str__(self):
return "%s(hosts=%s)" % (self.__class__.__name__, self.hosts)

View File: asgi_redis/local.py

@@ -0,0 +1,72 @@
from __future__ import unicode_literals
from asgi_redis import RedisChannelLayer
class RedisLocalChannelLayer(RedisChannelLayer):
"""
Variant of the Redis channel layer that also uses a local-machine
channel layer instance to route all non-machine-specific messages
to a local machine, while using the Redis backend for all machine-specific
messages and group management/sends.
This allows the majority of traffic to go over the local layer for things
like http.request and websocket.receive, while still allowing Groups to
broadcast to all connected clients and keeping reply_channel names valid
across all workers.
"""
def __init__(self, expiry=60, prefix="asgi:", group_expiry=86400, capacity=100, channel_capacity=None, **kwargs):
# Initialise the base class
super(RedisLocalChannelLayer, self).__init__(
prefix = prefix,
expiry = expiry,
group_expiry = group_expiry,
capacity = capacity,
channel_capacity = channel_capacity,
**kwargs
)
# Set up our local transport layer as well
try:
from asgi_ipc import IPCChannelLayer
except ImportError:
raise ValueError("You must install asgi_ipc to use the local variant")
self.local_layer = IPCChannelLayer(
prefix = prefix.replace(":", ""),
expiry = expiry,
group_expiry = group_expiry,
capacity = capacity,
channel_capacity = channel_capacity,
)
### ASGI API ###
def send(self, channel, message):
# If the channel is "normal", use IPC layer, otherwise use Redis layer
if "!" in channel or "?" in channel:
return super(RedisLocalChannelLayer, self).send(channel, message)
else:
return self.local_layer.send(channel, message)
def receive(self, channels, block=False):
# Work out what kinds of channels are in there
num_remote = len([channel for channel in channels if "!" in channel or "?" in channel])
num_local = len(channels) - num_remote
# If they mixed types, force nonblock mode and query both backends, local first
if num_local and num_remote:
result = self.local_layer.receive(channels, block=False)
if result[0] is not None:
return result
return super(RedisLocalChannelLayer, self).receive(channels, block=block)
# If they just did one type, pass off to that backend
elif num_local:
return self.local_layer.receive(channels, block=block)
else:
return super(RedisLocalChannelLayer, self).receive(channels, block=block)
# new_channel always goes to Redis as it's always remote channels.
# Group APIs always go to Redis too.
def flush(self):
# Dispatch flush to both
super(RedisLocalChannelLayer, self).flush()
self.local_layer.flush()

View File: asgi_redis/sentinel.py

@@ -0,0 +1,178 @@
from __future__ import unicode_literals
import itertools
import random
import redis
from redis.sentinel import Sentinel
import six
import time
from asgi_redis.core import BaseRedisChannelLayer
random.seed()
class RedisSentinelChannelLayer(BaseRedisChannelLayer):
"""
Variant of the Redis channel layer that supports the Redis Sentinel HA protocol.
Supports sharding, but assumes that there is only one sentinel cluster with multiple redis services
monitored by that cluster. So, this will only connect to a single cluster of Sentinel servers,
but will support sharding by asking that Sentinel cluster for different services. Also, any Redis connection
options (socket timeout, socket keepalive, etc.) are assumed to be identical across Redis servers, and
across all services.
"hosts" in this arrangement, is used to list the redis sentinel hosts. As such, it only supports the
tuple method of specifying hosts, as that's all the redis sentinel python library supports at the moment.
"services" is the list of redis services monitored by the sentinel system that redis keys will be distributed
across. If services is empty, this will fetch all services from Sentinel at initialization.
"""
def __init__(
self,
expiry=60,
hosts=None,
prefix="asgi:",
group_expiry=86400,
capacity=100,
channel_capacity=None,
symmetric_encryption_keys=None,
stats_prefix="asgi-meta:",
connection_kwargs=None,
services=None,
sentinel_refresh_interval=0,
):
super(RedisSentinelChannelLayer, self).__init__(
expiry=expiry,
hosts=hosts,
prefix=prefix,
group_expiry=group_expiry,
capacity=capacity,
channel_capacity=channel_capacity,
symmetric_encryption_keys=symmetric_encryption_keys,
stats_prefix=stats_prefix,
connection_kwargs=connection_kwargs,
)
# Master connection caching
self.sentinel_refresh_interval = sentinel_refresh_interval
self._last_sentinel_refresh = 0
self._master_connections = {}
connection_kwargs = connection_kwargs or {}
self._sentinel = Sentinel(self._setup_hosts(hosts), **connection_kwargs)
if services:
self._validate_service_names(services)
self.services = services
else:
self.services = self._get_service_names()
# Precalculate some values for ring selection
self.ring_size = len(self.services)
self._receive_index_generator = itertools.cycle(range(len(self.services)))
self._send_index_generator = itertools.cycle(range(len(self.services)))
self._register_scripts()
### Setup ###
def _setup_hosts(self, hosts):
# Override to only accept tuples, since the redis.sentinel.Sentinel does not accept URLs
if not hosts:
hosts = [("localhost", 26379)]
final_hosts = list()
if isinstance(hosts, six.string_types):
# user accidentally used one host string instead of providing a list of hosts
raise ValueError("ASGI Redis hosts must be specified as an iterable list of hosts.")
for entry in hosts:
if isinstance(entry, six.string_types):
raise ValueError("Sentinel Redis host entries must be specified as tuples, not strings.")
else:
final_hosts.append(entry)
return final_hosts
def _validate_service_names(self, services):
if isinstance(services, six.string_types):
raise ValueError("Sentinel service types must be specified as an iterable list of strings")
for entry in services:
if not isinstance(entry, six.string_types):
raise ValueError("Sentinel service types must be specified as strings.")
def _get_service_names(self):
"""
Get a list of service names from Sentinel. Tries Sentinel hosts until one succeeds; if none succeed,
raises a ConnectionError.
"""
master_info = None
connection_errors = []
for sentinel in self._sentinel.sentinels:
# Unfortunately, redis.sentinel.Sentinel does not support sentinel_masters, so we have to step
# through all of its connections manually
try:
master_info = sentinel.sentinel_masters()
break
except (redis.ConnectionError, redis.TimeoutError) as e:
connection_errors.append("Failed to connect to {}: {}".format(sentinel, e))
continue
if master_info is None:
raise redis.ConnectionError(
"Could not get master info from sentinel\n{}.".format("\n".join(connection_errors)))
return list(master_info.keys())
### Connection handling ####
def _master_for(self, service_name):
if self.sentinel_refresh_interval <= 0:
return self._sentinel.master_for(service_name)
else:
if (time.time() - self._last_sentinel_refresh) > self.sentinel_refresh_interval:
self._populate_masters()
return self._master_connections[service_name]
def _populate_masters(self):
self._master_connections = {service: self._sentinel.master_for(service) for service in self.services}
self._last_sentinel_refresh = time.time()
def connection(self, index):
# return the master for the given index
# If index is explicitly None, pick a random server
if index is None:
index = self.random_index()
# Catch bad indexes
if not 0 <= index < self.ring_size:
raise ValueError("There are only %s hosts - you asked for %s!" % (self.ring_size, index))
service_name = self.services[index]
return self._master_for(service_name)
def random_index(self):
return random.randint(0, len(self.services) - 1)
### Flush extension ###
def flush(self):
"""
Deletes all messages and groups on all shards.
"""
for service_name in self.services:
connection = self._master_for(service_name)
self.delprefix(keys=[], args=[self.prefix + "*"], client=connection)
self.delprefix(keys=[], args=[self.stats_prefix + "*"], client=connection)
### Statistics extension ###
def global_statistics(self):
connection_list = [self._master_for(service_name) for service_name in self.services]
return self._count_global_stats(connection_list)
def channel_statistics(self, channel):
if "!" in channel or "?" in channel:
connections = [self.connection(self.consistent_hash(channel))]
else:
# if we don't know where it is, we have to check in all shards
connections = [self._master_for(service_name) for service_name in self.services]
return self._count_channel_stats(channel, connections)
def __str__(self):
return "%s(services==%s)" % (self.__class__.__name__, self.services)

View File: asgi_redis/twisted_utils.py

@@ -0,0 +1,19 @@
from __future__ import unicode_literals
try:
from twisted.internet import defer
except ImportError:
class defer(object):
"""
Fake "defer" object that allows us to use decorators in the main
class file but that errors when it's attempted to be invoked.
Used so you can import the client without Twisted but can't run
without it.
"""
@staticmethod
def inlineCallbacks(func):
def inner(*args, **kwargs):
raise NotImplementedError("Twisted is not installed")
return inner

View File: msgpack_python.egg-info/PKG-INFO

@@ -0,0 +1,20 @@
Metadata-Version: 1.1
Name: msgpack-python
Version: 0.5.6
Summary: MessagePack (de)serializer.
Home-page: http://msgpack.org/
Author: INADA Naoki
Author-email: songofacandy@gmail.com
License: Apache 2.0
Description: This package is deprecated. Install msgpack instead.
Platform: UNKNOWN
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License

View File: msgpack_python.egg-info/SOURCES.txt

@@ -0,0 +1,28 @@
README.rst
msgpack/__init__.py
msgpack/_packer.cpp
msgpack/_unpacker.cpp
msgpack/_version.py
msgpack/exceptions.py
msgpack/fallback.py
msgpack_python.egg-info/PKG-INFO
msgpack_python.egg-info/SOURCES.txt
msgpack_python.egg-info/dependency_links.txt
msgpack_python.egg-info/top_level.txt
test/test_buffer.py
test/test_case.py
test/test_except.py
test/test_extension.py
test/test_format.py
test/test_limits.py
test/test_memoryview.py
test/test_newspec.py
test/test_obj.py
test/test_pack.py
test/test_read_size.py
test/test_seq.py
test/test_sequnpack.py
test/test_stricttype.py
test/test_subtype.py
test/test_unpack.py
test/test_unpack_raw.py

View File: msgpack_python.egg-info/installed-files.txt

@@ -0,0 +1,14 @@
../msgpack/__init__.py
../msgpack/__pycache__/__init__.cpython-36.pyc
../msgpack/__pycache__/_version.cpython-36.pyc
../msgpack/__pycache__/exceptions.cpython-36.pyc
../msgpack/__pycache__/fallback.cpython-36.pyc
../msgpack/_packer.cpython-36m-x86_64-linux-gnu.so
../msgpack/_unpacker.cpython-36m-x86_64-linux-gnu.so
../msgpack/_version.py
../msgpack/exceptions.py
../msgpack/fallback.py
PKG-INFO
SOURCES.txt
dependency_links.txt
top_level.txt