
Commit 0fe1d48

actions-user authored and committed
Apply automatic release changes for v0.11.0
1 parent 6714f3a commit 0fe1d48


README.md

Lines changed: 17 additions & 17 deletions
@@ -57,7 +57,7 @@ The workspace requires **Docker** to be installed on your machine ([📖 Install
 Deploying a single workspace instance is as simple as:

 ```bash
-docker run -p 8080:8080 mltooling/ml-workspace:0.10.4
+docker run -p 8080:8080 mltooling/ml-workspace:0.11.0
 ```

 Voilà, that was easy! Now, Docker will pull the latest workspace image to your machine. This may take a few minutes, depending on your internet speed. Once the workspace is started, you can access it via http://localhost:8080.
@@ -74,7 +74,7 @@ docker run -d \
 --env AUTHENTICATE_VIA_JUPYTER="mytoken" \
 --shm-size 512m \
 --restart always \
-mltooling/ml-workspace:0.10.4
+mltooling/ml-workspace:0.11.0
 ```

 This command runs the container in the background (`-d`), mounts your current working directory into the `/workspace` folder (`-v`), secures the workspace via a provided token (`--env AUTHENTICATE_VIA_JUPYTER`), provides 512MB of shared memory (`--shm-size`) to prevent unexpected crashes (see the [known issues section](#known-issues)), and keeps the container running even on system restarts (`--restart always`). You can find additional options for `docker run` [here](https://docs.docker.com/engine/reference/commandline/run/) and workspace configuration options in [the section below](#Configuration).
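The leading lines of this command fall outside the diff window. Based on the hunk header and the explanation above, the full updated command presumably reads as follows (a reconstruction, not part of the diff):

```bash
docker run -d \
    -p 8080:8080 \
    -v "${PWD}:/workspace" \
    --env AUTHENTICATE_VIA_JUPYTER="mytoken" \
    --shm-size 512m \
    --restart always \
    mltooling/ml-workspace:0.11.0
```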
@@ -181,7 +181,7 @@ We strongly recommend enabling authentication via one of the following two optio
 Activate the token-based authentication based on the authentication implementation of Jupyter via the `AUTHENTICATE_VIA_JUPYTER` variable:

 ```bash
-docker run -p 8080:8080 --env AUTHENTICATE_VIA_JUPYTER="mytoken" mltooling/ml-workspace:0.10.4
+docker run -p 8080:8080 --env AUTHENTICATE_VIA_JUPYTER="mytoken" mltooling/ml-workspace:0.11.0
 ```

 You can also use `<generated>` to let Jupyter generate a random token that is printed out in the container logs. A value of `true` will not set any token; instead, every request to any tool in the workspace is checked against the Jupyter instance to verify that the user is authenticated. This is used for tools like JupyterHub, which configures its own way of authentication.
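Following the same pattern as the command in this hunk, the generated-token mode described above would be enabled like this (an illustrative invocation, not part of the diff):

```bash
# Let Jupyter create a random token; check the container logs for its value
docker run -p 8080:8080 --env AUTHENTICATE_VIA_JUPYTER="<generated>" mltooling/ml-workspace:0.11.0
```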
@@ -191,7 +191,7 @@ You can also use `<generated>` to let Jupyter generate a random token that is pr
 Activate the basic authentication via the `WORKSPACE_AUTH_USER` and `WORKSPACE_AUTH_PASSWORD` variables:

 ```bash
-docker run -p 8080:8080 --env WORKSPACE_AUTH_USER="user" --env WORKSPACE_AUTH_PASSWORD="pwd" mltooling/ml-workspace:0.10.4
+docker run -p 8080:8080 --env WORKSPACE_AUTH_USER="user" --env WORKSPACE_AUTH_PASSWORD="pwd" mltooling/ml-workspace:0.11.0
 ```

 The basic authentication is configured via the nginx proxy and might be more performant than the other option, since with `AUTHENTICATE_VIA_JUPYTER` every request to any tool in the workspace is checked against the Jupyter instance to determine whether the user (based on the request cookies) is authenticated.
@@ -212,7 +212,7 @@ docker run \
 -p 8080:8080 \
 --env WORKSPACE_SSL_ENABLED="true" \
 -v /path/with/certificate/files:/resources/ssl:ro \
-mltooling/ml-workspace:0.10.4
+mltooling/ml-workspace:0.11.0
 ```

 If you want to host the workspace on a public domain, we recommend using [Let's Encrypt](https://letsencrypt.org/getting-started/) to get a trusted certificate for your domain. To use a certificate generated via the [certbot](https://certbot.eff.org/) tool for the workspace, the `privkey.pem` corresponds to the `cert.key` file and the `fullchain.pem` to the `cert.crt` file.
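As a sketch of that file mapping (assuming certbot's default output layout; the domain is illustrative):

```bash
# Copy the certbot output into the folder that is mounted to /resources/ssl
mkdir -p /path/with/certificate/files
cp /etc/letsencrypt/live/example.com/privkey.pem   /path/with/certificate/files/cert.key
cp /etc/letsencrypt/live/example.com/fullchain.pem /path/with/certificate/files/cert.crt
```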
@@ -233,7 +233,7 @@ By default, the workspace container has no resource constraints and can use as m
 For example, the following command restricts the workspace to only use a maximum of 8 CPUs, 16 GB of memory, and 1 GB of shared memory (see [Known Issues](#known-issues)):

 ```bash
-docker run -p 8080:8080 --cpus=8 --memory=16g --shm-size=1G mltooling/ml-workspace:0.10.4
+docker run -p 8080:8080 --cpus=8 --memory=16g --shm-size=1G mltooling/ml-workspace:0.11.0
 ```

 > 📖 _For more options and documentation on resource constraints, please refer to the [official docker guide](https://docs.docker.com/config/containers/resource_constraints/)._
@@ -262,7 +262,7 @@ In addition to the main workspace image (`mltooling/ml-workspace`), we provide o
 The minimal flavor (`mltooling/ml-workspace-minimal`) is our smallest image; it contains most of the tools and features described in the [features section](#features) but omits most of the Python libraries that are pre-installed in our main image. Any Python library or excluded tool can be installed manually at runtime by the user.

 ```bash
-docker run -p 8080:8080 mltooling/ml-workspace-minimal:0.10.4
+docker run -p 8080:8080 mltooling/ml-workspace-minimal:0.11.0
 ```
 </details>

@@ -280,7 +280,7 @@ docker run -p 8080:8080 mltooling/ml-workspace-minimal:0.10.4
 The R flavor (`mltooling/ml-workspace-r`) is based on our default workspace image and extends it with the R-interpreter, R-Jupyter kernel, RStudio server (access via `Open Tool -> RStudio`), and a variety of popular packages from the R ecosystem.

 ```bash
-docker run -p 8080:8080 mltooling/ml-workspace-r:0.10.4
+docker run -p 8080:8080 mltooling/ml-workspace-r:0.11.0
 ```
 </details>

@@ -298,7 +298,7 @@ docker run -p 8080:8080 mltooling/ml-workspace-r:0.10.4
 The Spark flavor (`mltooling/ml-workspace-spark`) is based on our R-flavor workspace image and extends it with the Spark-interpreter, Spark-Jupyter kernel (Apache Toree), Zeppelin Notebook (access via `Open Tool -> Zeppelin`), and a few additional python libraries & Jupyter extensions.

 ```bash
-docker run -p 8080:8080 mltooling/ml-workspace-spark:0.10.4
+docker run -p 8080:8080 mltooling/ml-workspace-spark:0.11.0
 ```

 </details>
@@ -322,13 +322,13 @@ The GPU flavor (`mltooling/ml-workspace-gpu`) is based on our default workspace
 - (Docker >= 19.03) Nvidia Container Toolkit ([📖 Instructions](https://github.com/NVIDIA/nvidia-docker/wiki/Installation-(Native-GPU-Support))).

 ```bash
-docker run -p 8080:8080 --gpus all mltooling/ml-workspace-gpu:0.10.4
+docker run -p 8080:8080 --gpus all mltooling/ml-workspace-gpu:0.11.0
 ```

 - (Docker < 19.03) Nvidia Docker 2.0 ([📖 Instructions](https://github.com/NVIDIA/nvidia-docker/wiki/Installation-(version-2.0))).

 ```bash
-docker run -p 8080:8080 --runtime nvidia --env NVIDIA_VISIBLE_DEVICES="all" mltooling/ml-workspace-gpu:0.10.4
+docker run -p 8080:8080 --runtime nvidia --env NVIDIA_VISIBLE_DEVICES="all" mltooling/ml-workspace-gpu:0.11.0
 ```

 The GPU flavor also comes with a few additional configuration options, as explained below:
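Both invocation styles above can also target a subset of GPUs rather than all of them; a sketch using standard Docker and NVIDIA runtime syntax (the device indices are illustrative):

```bash
# Docker >= 19.03: expose only the first two GPUs to the workspace
docker run -p 8080:8080 --gpus '"device=0,1"' mltooling/ml-workspace-gpu:0.11.0

# Docker < 19.03: the same restriction via the NVIDIA runtime
docker run -p 8080:8080 --runtime nvidia --env NVIDIA_VISIBLE_DEVICES="0,1" mltooling/ml-workspace-gpu:0.11.0
```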
@@ -367,7 +367,7 @@ The workspace is designed as a single-user development environment. For a multi-
 ML Hub makes it easy to set up a multi-user environment on a single server (via Docker) or a cluster (via Kubernetes) and supports a variety of usage scenarios & authentication providers. You can try out ML Hub via:

 ```bash
-docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock mltooling/ml-hub:0.10.4
+docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock mltooling/ml-hub:0.11.0
 ```

 For more information and documentation about ML Hub, please take a look at the [Github Site](https://github.com/ml-tooling/ml-hub).
@@ -726,7 +726,7 @@ To run Python code as a job, you need to provide a path or URL to a code directo
 You can execute code directly from Git, Mercurial, Subversion, or Bazaar by using the pip-vcs format as described in [this guide](https://pip.pypa.io/en/stable/reference/pip_install/#vcs-support). For example, to execute code from a [subdirectory](https://github.com/ml-tooling/ml-workspace/tree/main/resources/tests/ml-job) of a git repository, just run:

 ```bash
-docker run --env EXECUTE_CODE="git+https://github.com/ml-tooling/ml-workspace.git#subdirectory=resources/tests/ml-job" mltooling/ml-workspace:0.10.4
+docker run --env EXECUTE_CODE="git+https://github.com/ml-tooling/ml-workspace.git#subdirectory=resources/tests/ml-job" mltooling/ml-workspace:0.11.0
 ```

 > 📖 _For additional information on how to specify branches, commits, or tags please refer to [this guide](https://pip.pypa.io/en/stable/reference/pip_install/#vcs-support)._
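For instance, pinning the executed code to a specific ref follows the standard pip VCS URL syntax (`@<ref>` before the `#` fragment; the branch name below is illustrative):

```bash
# Execute code from a specific branch of the repository
docker run --env EXECUTE_CODE="git+https://github.com/ml-tooling/ml-workspace.git@main#subdirectory=resources/tests/ml-job" mltooling/ml-workspace:0.11.0
```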
@@ -736,7 +736,7 @@ docker run --env EXECUTE_CODE="git+https://github.com/ml-tooling/ml-workspace.gi
 In the following example, we mount and execute the current working directory (expected to contain our code) into the `/workspace/ml-job/` directory of the workspace:

 ```bash
-docker run -v "${PWD}:/workspace/ml-job/" --env EXECUTE_CODE="/workspace/ml-job/" mltooling/ml-workspace:0.10.4
+docker run -v "${PWD}:/workspace/ml-job/" --env EXECUTE_CODE="/workspace/ml-job/" mltooling/ml-workspace:0.11.0
 ```

 #### Install Dependencies
@@ -762,7 +762,7 @@ python /resources/scripts/execute_code.py /path/to/your/job
 It is also possible to embed your code directly into a custom job image, as shown below:

 ```dockerfile
-FROM mltooling/ml-workspace:0.10.4
+FROM mltooling/ml-workspace:0.11.0

 # Add job code to image
 COPY ml-job /workspace/ml-job
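The remainder of that Dockerfile lies outside the diff context. Building and running such a job image would presumably look roughly like this (the image tag is illustrative, and `EXECUTE_CODE` is passed at run time on the assumption that the truncated Dockerfile does not already set it):

```bash
# Build the custom job image and execute the embedded code
docker build -t my-ml-job .
docker run --env EXECUTE_CODE="/workspace/ml-job" my-ml-job
```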
@@ -827,7 +827,7 @@ The workspace can be extended in many ways at runtime, as explained [here](#exte

 ```dockerfile
 # Extend from any of the workspace versions/flavors
-FROM mltooling/ml-workspace:0.10.4
+FROM mltooling/ml-workspace:0.11.0

 # Run your customizations, e.g.
 RUN \
@@ -1075,7 +1075,7 @@ import sys
 Certain desktop tools (e.g., recent versions of [Firefox](https://github.com/jlesage/docker-firefox#increasing-shared-memory-size)) or libraries (e.g., Pytorch - see Issues: [1](https://github.com/pytorch/pytorch/issues/2244), [2](https://github.com/pytorch/pytorch/issues/1355)) might crash if the shared memory size (`/dev/shm`) is too small. The default shared memory size of Docker is 64MB, which might not be enough for a few tools. You can provide a higher shared memory size via the `shm-size` docker run option:

 ```bash
-docker run --shm-size=2G mltooling/ml-workspace:0.10.4
+docker run --shm-size=2G mltooling/ml-workspace:0.11.0
 ```

 </details>
