
Deploy to Kubernetes
====================

This guide describes how to deploy a websockets server to Kubernetes_. It
assumes familiarity with Docker and Kubernetes.

We're going to deploy a simple app to a local Kubernetes cluster and ensure
that it scales as expected.

In a more realistic context, you would follow your organization's practices
for deploying to Kubernetes, but you would apply the same principles as far as
websockets is concerned.

.. _Kubernetes: https://kubernetes.io/

.. _containerize-application:

Containerize application
------------------------

Here's the app we're going to deploy. Save it in a file called
``app.py``:

.. literalinclude:: ../../example/deployment/kubernetes/app.py

This is an echo server with one twist: every message blocks the server for
100ms, which creates artificial starvation of CPU time. This makes it easier
to saturate the server for load testing.

The app exposes a health check on ``/healthz``. It also provides two other
endpoints for testing purposes: ``/inemuri`` will make the app unresponsive
for 10 seconds and ``/seppuku`` will terminate it.
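
If the example file isn't at hand, the core of such an app can be sketched
with the threaded ``websockets.sync.server`` API — this is only a minimal
approximation; the bundled ``app.py`` also implements ``/inemuri``,
``/seppuku``, and graceful shutdown:

.. code-block:: python

   import time
   from http import HTTPStatus

   from websockets.sync.server import serve

   def health_check(connection, request):
       # Answer Kubernetes' HTTP probe before the WebSocket handshake.
       if request.path == "/healthz":
           return connection.respond(HTTPStatus.OK, "OK\n")

   def echo(websocket):
       for message in websocket:
           time.sleep(0.1)  # burn 100ms per message to ease load testing
           websocket.send(message)

   def main():
       # Serve on port 80, matching the Dockerfile and the Kubernetes probes.
       with serve(echo, "", 80, process_request=health_check) as server:
           server.serve_forever()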

The quest for the perfect Python container image is beyond the scope of this
guide, so we'll go for the simplest possible configuration instead:

.. literalinclude:: ../../example/deployment/kubernetes/Dockerfile
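
In case the example file isn't available, a minimal ``Dockerfile`` along
these lines should behave the same way — the base image tag is an assumption,
not taken from the example:

.. code-block:: docker

   FROM python:3.12-slim
   RUN pip install websockets
   COPY app.py .
   # The app listens on port 80, as the rest of this guide assumes.
   CMD ["python", "app.py"]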

After saving this ``Dockerfile``, build the image:

.. code-block:: console

   $ docker build -t websockets-test:1.0 .

Test your image by running:

.. code-block:: console

   $ docker run --name run-websockets-test --publish 32080:80 --rm \
       websockets-test:1.0

Then, in another shell, in a virtualenv where websockets is installed, connect
to the app and check that it echoes anything you send:

.. code-block:: console

   $ python -m websockets ws://localhost:32080/
   Connected to ws://localhost:32080/.
   > Hey there!
   < Hey there!
   >

Now, in yet another shell, stop the app with:

.. code-block:: console

   $ docker kill -s TERM run-websockets-test

Going to the shell where you connected to the app, you can confirm that it
shut down gracefully:

.. code-block:: console

   $ python -m websockets ws://localhost:32080/
   Connected to ws://localhost:32080/.
   > Hey there!
   < Hey there!
   Connection closed: 1001 (going away).

If it didn't, you'd get code 1006 (abnormal closure).
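
The graceful close with code 1001 works because the app reacts to SIGTERM
instead of dying abruptly. The wiring for that, presumably, looks something
like this stdlib-only sketch (the function name and return value are
illustrative, not taken from ``app.py``):

.. code-block:: python

   import asyncio
   import signal

   async def main():
       loop = asyncio.get_running_loop()
       stop = loop.create_future()
       # ``docker kill -s TERM`` delivers SIGTERM; resolve the future then.
       loop.add_signal_handler(signal.SIGTERM, stop.set_result, None)
       # ... start the websockets server here ...
       await stop
       # ... close open connections with code 1001 (going away) here ...
       return "stopped"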

Deploy application
------------------

Configuring Kubernetes is even further beyond the scope of this guide, so
we'll use a basic configuration for testing, with just one Service_ and one
Deployment_:

.. literalinclude:: ../../example/deployment/kubernetes/deployment.yaml
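
If the example file isn't available, a minimal configuration consistent with
the rest of this guide would look roughly like this — the ``imagePullPolicy``
is an assumption that suits a locally built image, and probe timeouts and
thresholds are left at their defaults:

.. code-block:: yaml

   apiVersion: v1
   kind: Service
   metadata:
     name: websockets-test
   spec:
     type: NodePort
     ports:
     - port: 80
       targetPort: 80
       nodePort: 32080
     selector:
       app: websockets-test
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: websockets-test
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: websockets-test
     template:
       metadata:
         labels:
           app: websockets-test
       spec:
         containers:
         - name: websockets-test
           image: websockets-test:1.0
           imagePullPolicy: Never
           ports:
           - containerPort: 80
           livenessProbe:
             httpGet:
               path: /healthz
               port: 80
             periodSeconds: 1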

For local testing, a service of type NodePort_ is good enough. For deploying
to production, you would configure an Ingress_.

.. _Service: https://kubernetes.io/docs/concepts/services-networking/service/
.. _Deployment: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
.. _NodePort: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
.. _Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/

After saving this to a file called ``deployment.yaml``, you can deploy:

.. code-block:: console

   $ kubectl apply -f deployment.yaml
   service/websockets-test created
   deployment.apps/websockets-test created

Now you have a deployment with one pod running:

.. code-block:: console

   $ kubectl get deployment websockets-test
   NAME              READY   UP-TO-DATE   AVAILABLE   AGE
   websockets-test   1/1     1            1           10s
   $ kubectl get pods -l app=websockets-test
   NAME                               READY   STATUS    RESTARTS   AGE
   websockets-test-86b48f4bb7-nltfh   1/1     Running   0          10s

You can connect to the service — press Ctrl-D to exit:

.. code-block:: console

   $ python -m websockets ws://localhost:32080/
   Connected to ws://localhost:32080/.
   Connection closed: 1000 (OK).

Validate deployment
-------------------

First, let's ensure the liveness probe works by making the app unresponsive:

.. code-block:: console

   $ curl http://localhost:32080/inemuri
   Sleeping for 10s

Since we have only one pod, we know that this pod will go to sleep.

The liveness probe is configured to run every second. By default, liveness
probes time out after one second and have a threshold of three failures.
Therefore Kubernetes should restart the pod after at most 5 seconds.

Indeed, after a few seconds, the pod reports a restart:

.. code-block:: console

   $ kubectl get pods -l app=websockets-test
   NAME                               READY   STATUS    RESTARTS   AGE
   websockets-test-86b48f4bb7-nltfh   1/1     Running   1          42s

Next, let's take it one step further and crash the app:

.. code-block:: console

   $ curl http://localhost:32080/seppuku
   Terminating

The pod reports a second restart:

.. code-block:: console

   $ kubectl get pods -l app=websockets-test
   NAME                               READY   STATUS    RESTARTS   AGE
   websockets-test-86b48f4bb7-nltfh   1/1     Running   2          72s

All good — Kubernetes delivers on its promise to keep our app alive!

Scale deployment
----------------

Of course, Kubernetes is for scaling. Let's scale — modestly — to 10 pods:

.. code-block:: console

   $ kubectl scale deployment.apps/websockets-test --replicas=10
   deployment.apps/websockets-test scaled

After a few seconds, we have 10 pods running:

.. code-block:: console

   $ kubectl get deployment websockets-test
   NAME              READY   UP-TO-DATE   AVAILABLE   AGE
   websockets-test   10/10   10           10          10m

Now let's generate load. We'll use this script:

.. literalinclude:: ../../example/deployment/kubernetes/benchmark.py

We'll connect 500 clients in parallel, meaning 50 clients per pod, and have
each client send 6 messages. Since the app blocks for 100ms before responding,
if connections are perfectly distributed, we expect a total run time slightly
over 50 * 6 * 0.1 = 30 seconds.
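
The back-of-the-envelope estimate can be spelled out in a few lines of
Python — the numbers come straight from the benchmark parameters above:

.. code-block:: python

   clients = 500
   pods = 10
   messages_per_client = 6
   block_per_message = 0.1  # seconds; the artificial delay in app.py

   clients_per_pod = clients // pods  # 50 clients per pod
   # Each pod works through its clients' messages 100ms at a time.
   expected = clients_per_pod * messages_per_client * block_per_message
   print(f"expected runtime: about {expected:.0f} seconds")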

Let's try it:

.. code-block:: console

   $ ulimit -n 512
   $ time python benchmark.py 500 6
   python benchmark.py 500 6  2.40s user 0.51s system 7% cpu 36.471 total

A total runtime of 36 seconds is in the right ballpark. Repeating this
experiment with other parameters shows roughly consistent results, with the
high variability you'd expect from a quick benchmark without any effort to
stabilize the test setup.

Finally, we can scale back to one pod:

.. code-block:: console

   $ kubectl scale deployment.apps/websockets-test --replicas=1
   deployment.apps/websockets-test scaled
   $ kubectl get deployment websockets-test
   NAME              READY   UP-TO-DATE   AVAILABLE   AGE
   websockets-test   1/1     1            1           15m