mpy-metapackage¶
Muppy Metapackage is a Helm chart designed to deploy a large set of (Docker) applications on Kubernetes using Muppy.
mpy-metapackage is installed by default in all Kubernetes clusters set up by Muppy.
mpy-metapackage contains all the logic required to control the application using the Muppy GUI. For example, network and security can be configured in a few clicks in the Package Release Network tab.
With mpy-metapackage, you describe how a set of parts must be deployed, and you control them using the Muppy GUI.
mpy-metapackage lets you define four kinds of parts:
- server-based parts use k8s Deployments and are expected to define HTTP entry points
- worker-based parts use k8s Deployments and do not expose network entry points
- cronjob parts are defined using Kubernetes CronJobs
- http parts define only routes to other workloads (see the mpy-Traefik-Dashboard-Ingress example)
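Drawing only on the reference section later in this page (the part codes web, jobs and nightly are made up for illustration), a package combining three of these kinds might be sketched as:

```yaml
mpy_meta_config:
  parts:
    web:                  # type `server` (default): Deployment + HTTP entry point
      deployment:
        replica_count: 1
      ports:
        - name: http
          port: 80
      routes:
        - name: gui
          pathPrefix: PathPrefix(`/`)
          port_name: "http"
    jobs:                 # type `worker`: Deployment, no network entry point
      type: worker
      deployment:
        replica_count: 1
    nightly:              # type `cronjob`: Kubernetes CronJob
      type: cronjob
      cronjob:
        schedule: "0 3 * * *"
```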
mpy-metapackage benefits from Muppy's Kubernetes setup:
- Network Ingress routing
- Network filtering
- Debug mode
- Dashboard
mpy-metapackage can be used directly or as a sub chart.
mpy-metapackage Capabilities¶
- Network
- Setup
- Security
- Debug Mode
- Dashboard
- CronJobs
Debug Mode¶
Debug mode lets you reconfigure the Pod to run a specified command, which is useful for debugging.
It can be as simple as sleep or a complete VSCode Server instance.
Usage¶
Package Release with Package: mpy-metapackage
Warning
mpy_meta_config content must be static and must not rely on the templating system, since it is parsed "unrendered".
'mpy_meta_config' reference¶
mpy_meta_config:
parts: # m2p deploys parts. Each m2p must have at least one part. Generally, a part is linked to a Docker image.
{{part_code}}: # a meaningful code for the part !!! part_code length must be <= 8 chars !!!
#type: server # [OPTIONAL] `server` | `worker` | `cronjob` | 'http' (default when unspecified is `server`)
deployment: # deployment gathers all the info needed to set up a Kubernetes Deployment.
#docker_image: ... # [OPTIONAL] By default the deployment image is defined in the profile. This allows forcing an alternate image for this part
#command: [] # [OPTIONAL] Warning! When supplied, the Kubernetes command overrides the Docker entrypoint. So unless you want to change the entrypoint, use args instead!
#args: [] # [OPTIONAL]
env_vars: # [OPTIONAL] The deployment's additional ENV vars can be defined here. Note that common_env_vars defined in values are always passed to each part.
- name: EXAMPLE_ENV_VAR
value: "any value"
resources: # [OPTIONAL] Default resource requests and limits must be defined here. If set, the user can adjust these values from the dashboard.
requests:
cpu: 0.1
memory: "200M"
limits:
cpu: 0.3
memory: "300M"
replica_count: 2 # [REQUIRED] This defines the default value for replicas, which can be modified from the dashboard.
# !!! deployment must contain at least one (possibly empty) property, e.g. env_vars or resources
ports: # if the part exposes ports, they must be defined here if users want to set up routes to allow access
- name: http # each port must be named
port: 80 # and identified
# Debug Mode setup
# To make debug mode available, set debug_mode_available to basic or codr
debug_mode_available: null | basic | codr # [OPTIONAL] can be completely omitted if null
# Then you must define debug_config
# Finally, debug mode is activated from the Dashboard, so you must set up a dashboard to switch debug_mode on / off
# Debug Mode is switched on with $.Values.dashboards.[part].debug_mode
# Example configuration for "basic" debug mode
debug_config:
#command: [] # [OPTIONAL] Warning! When supplied, the Kubernetes command overrides the Docker entrypoint. So unless you want to change the entrypoint, use args instead!
args: ["sleep", "infinity"]
env_vars:
#- name: VARNAME
# value: "a value"
resources: # Here you can set different 'resources' applied in debug_mode
# Example configuration for "codr" debug mode
debug_config:
debug_fqdn_prefix: "codr-" # only used when debug_mode_available: "codr"
#command: [] # [OPTIONAL] Warning! When supplied, the Kubernetes command overrides the Docker entrypoint. So unless you want to change the entrypoint, use args instead!
args: [
"/usr/bin/code-server",
"--bind-addr=0.0.0.0:8765",
"--auth=password"
]
env_vars:
# To define the code-server password, you can set it here.
# But if a dashboard is defined, Muppy will use the GUI-defined value in `debug_env_vars`
#- name: PASSWORD
# value: "291902"
resources: # Here you can set different 'resources' applied in debug_mode.
requests:
cpu: 0.5
memory: "800M"
limits:
cpu: 2
memory: "1500M"
# When a package defines only one part, its FQDN is the one of the package.
# But when a package contains several servers, hostPrefix allows generating distinct
# routes by combining hostPrefix with the package FQDN.
# For example:
# - Given main FQDN: myapp.domain.ext
# - a part with hostPrefix 'api-' will be bound to a route on FQDN: api-myapp.domain.ext
# hostPrefix is optional.
hostPrefix: "api-" # [OPTIONAL] see above
routes: # allows binding URL paths to ports with defined security
- name: gui # each route must be named
pathPrefix: PathPrefix(`/`, `/path2`) # and define one or more Path Prefixes
port_name: "http" # Select one of the ports above
middlewares: # And define the middlewares that can be activated by the user in the Network tab of the Package release
ipwhitelist: true # Warning: this does not activate the middleware, but defines whether it can be activated and configured by the user
basicauth: true # Warning: this does not activate the middleware, but defines whether it can be activated and configured by the user
toctoc: true # Warning: this does not activate the middleware, but defines whether it can be activated and configured by the user
- name: codr # codr is a special route that must be defined like this when debug_mode_available is set to codr
pathPrefix: PathPrefix(`/`)
port: codr
debug_mode: true
middlewares:
ipwhitelist: true
# Up to 2.3.11
dashboard: # When dashboard is defined, a dashboard will be created with code = {{part_name}}
name: Fancy name # and this name will be used as the label
# since 2.3.12
dashboard_name: Fancy name
'http' parts¶
This kind of part lets you securely expose an existing Kubernetes service. For examples of 'http' parts, take a look at:
- mpy-kubernetes-dashboard-ingress
- Traefik Dashboard Ingress
via the menu k8s / K8s Packages.
'server' parts¶
'server' is the default type for parts. Parts of type 'server' are long-running processes that expose network ports, possibly made available over HTTPS on some paths.
Server parts are structured around three main sections:
- deployment
- ports
- routes (optional for servers available only inside the cluster)
Routes¶
A server can have multiple routes bound to different ports.
...
routes: # allows binding URL paths to ports with defined security
- name: gui # each route must be named
pathPrefix: PathPrefix(`/`, `/path2`) # and define one or more Path Prefixes
port_name: "http" # Select one of the ports above
middlewares: # And define the middlewares that can be activated by the user in the Network tab of the Package release
ipwhitelist: true # Warning: this does not activate the middleware, but defines whether it can be activated and configured by the user
basicauth: true # Warning: this does not activate the middleware, but defines whether it can be activated and configured by the user
toctoc: true # Warning: this does not activate the middleware, but defines whether it can be activated and configured by the user
- name: codr # codr is a special route that must be defined like this when debug_mode_available is set to codr
pathPrefix: PathPrefix(`/`)
port: codr
debug_mode: true
middlewares:
ipwhitelist: true
Stickiness¶
Available from 2.3.15
Keep in mind that Muppy leverages Traefik IngressRoutes, not standard Kubernetes Ingress.
Ref: https://doc.traefik.io/traefik/routing/providers/kubernetes-ingress/#stickiness-and-load-balancing
...
routes: # allows binding URL paths to ports with defined security
- name: gui # each route must be named
pathPrefix: PathPrefix(`/`, `/path2`) # and define one or more Path Prefixes
port_name: "http" # Select one of the ports above
#####( Please refer to the Traefik documentation above for Stickiness options )####
sticky:
cookie:
httpOnly: true
name: SESSION
# secure: true
# sameSite: none
# maxAge: 42
# path: /foo
#############
middlewares: # And define the middlewares that can be activated by the user in the Network tab of the Package release
ipwhitelist: true # Warning: this does not activate the middleware, but defines whether it can be activated and configured by the user
basicauth: true # Warning: this does not activate the middleware, but defines whether it can be activated and configured by the user
toctoc: true # Warning: this does not activate the middleware, but defines whether it can be activated and configured by the user
Probes¶
Available from 2.3.10
m2p supports the standard Kubernetes probes (readiness, liveness and startup).
Parts of type deployment support all three kinds of probes.
Parts of type cronjob support only startupProbe.
- To configure a probe, use pure Kubernetes syntax (see the Kubernetes doc)
- Probes are part-specific and must be defined in each part.
- Prefix or suffix the probe name to disable it, e.g. 'disabled_readinessProbe'
Example:
mpy_meta_config:
parts:
httpbin: # This is a part
...
readinessProbe:
httpGet:
path: /
port: 80
scheme: HTTP
initialDelaySeconds: 5
timeoutSeconds: 8
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
startupProbe:
httpGet:
path: /
port: 80
scheme: HTTP
initialDelaySeconds: 120
periodSeconds: 10
failureThreshold: 294
timeoutSeconds: 8
disabled_livenessProbe: # <== example of disabled probe
httpGet:
path: /
port: 80
scheme: HTTP
initialDelaySeconds: 120 # default 5
periodSeconds: 10 # first startup is about 23 min for now; following ones are about 3 min
failureThreshold: 294 # 294 * 10s = 2940s (49 min + 2 min initialDelay)
timeoutSeconds: 8 # default: 1 - number of seconds after which the probe times out.
Volumes¶
Available from 2.3.11
m2p lets you define a set of volumes that can be mounted in parts.
Each part can have its own volume, or you can share volumes among parts.
Volume setup involves two operations:
- Volume definition
- Mount volume in parts
You can adjust securityContext at the part level.
Warning
Please be aware that volumes are created using helm.sh/resource-policy: keep; hence they are removed only when the namespace is destroyed, not when the chart is deleted.
Define volumes¶
#
# Values defines `default_storage_class` from K8s Cluster
#default_storage_class:
#default_storage_class: csi-cinder-high-speed
default_storage_class: microk8s-hostpath
mpy_meta_config:
parts:
...
# here is the volume definition
volumesDefinition:
- name: axelor-data # [REQUIRED]
description: axelor main volume # [Optional]
claim: # [REQUIRED]
storageClassName: # [OPTIONAL] default is values.default_storage_class above
#accessModes: ReadWriteOnce # [OPTIONAL] Kubernetes Access Modes / default is ReadWriteOnce
size: 2Gi # [REQUIRED]
Quick Summary of Kubernetes Access Modes
| Mode | Description |
|---|---|
| ReadWriteOnce (RWO) | Read/write by a single node |
| ReadOnlyMany (ROX) | Read-only by multiple nodes |
| ReadWriteMany (RWX) | Read/write by multiple nodes |
| ReadWriteOncePod | Since Kubernetes 1.22: read/write by a single pod (more restrictive than RWO) |
Mount volumes¶
To mount a volume in part(s), specify its name and mount path as below.
#
# Values defines `default_storage_class` from K8s Cluster
#default_storage_class:
#default_storage_class: csi-cinder-high-speed
default_storage_class: microk8s-hostpath
mpy_meta_config:
parts:
...
my-part:
# Here you define the volumes to mount
volumes:
- name: axelor-data # [REQUIRED] must be a name defined in volumesDefinition below
mountPath: "/axelor-data" # [REQUIRED] where to mount the volume
readOnly: # [OPTIONAL] default is false
...
# here is the volume definition
volumesDefinition:
- name: axelor-data # [REQUIRED]
description: axelor main volume # [Optional]
claim: # [REQUIRED]
storageClassName: # [OPTIONAL] default is values.default_storage_class above
#accessModes: ReadWriteOnce # [OPTIONAL] default is ReadWriteOnce
size: 2Gi # [REQUIRED]
emptyDir¶
Available from 2.3.16
To use an emptyDir, declare it directly in the part's volumes (you must not define it in volumesDefinition).
mpy_meta_config:
parts:
workers:
type: worker
dashboard_name: Workers
deployment: # !required
args:
- --workers=1
- --no-http
- --limit-time-cpu=31536000
- --max-cron-threads=4
env_vars:
- name: INOUK_SESSION_STORE
value: postgresql
resources:
...
volumes:
# example 1
- name: tmp-storage
emptyDir: {}
mountPath: /opt/muppy/tmp # [REQUIRED] where to mount the volume
#readOnly: false # [OPTIONAL] default is false
# example 2 (with emptyDir options)
- name: tmp-storage-with-options
emptyDir:
medium: "Memory"
mountPath: /opt/muppy/tmp # [REQUIRED] where to mount the volume
#readOnly: false # [OPTIONAL] default is false
Adjust securityContext example¶
You can enter a pure k8s securityContext at the part level. See the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/).
#
# Values defines `default_storage_class` from K8s Cluster
#default_storage_class:
#default_storage_class: csi-cinder-high-speed
default_storage_class: microk8s-hostpath
mpy_meta_config:
parts:
...
my-part:
# Here you define the volumes to mount
volumes:
- name: axelor-data # [REQUIRED] must be a name defined in volumesDefinition below
mountPath: "/axelor-data" # [REQUIRED] where to mount the volume
readOnly: # [OPTIONAL] default is false
securityContext: # <=== Enter a std securityContext here
fsGroup: 1001
#runAsUser: 1001
#runAsGroup: 3000
...
# here is the volume definition
volumesDefinition:
- name: axelor-data # [REQUIRED]
description: axelor main volume # [Optional]
claim: # [REQUIRED]
storageClassName: # [OPTIONAL] default is values.default_storage_class above
#accessModes: ReadWriteOnce # [OPTIONAL] default is ReadWriteOnce
size: 2Gi # [REQUIRED]
ConfigMaps and Secrets¶
Available from 2.3.15
m2p lets you define ConfigMaps and Secrets and use them in containers.
ConfigMaps and Secrets are defined using Vault objects stored in the Muppy Vault.
The setup workflow is:
- Declare Vault objects in the mpy_meta_config.vault_objects section
- Define volumes to mount the ConfigMap in mpy_meta_config.parts.{part} as a file
- Create a Vault object to store the file content
Declare Vault objects¶
ConfigMap and Secrets are declared in mpy_meta_config.vault_objects.
# Here is where you configure your App
# You should use a distinct file. Convention is to name it 'mpy_meta_config.yaml'.
# If you need you can inject all app config here.
mpy_meta_config:
parts: # m2p deploys parts. Each m2p must have at least one part. Generally, a part is linked to a Docker image.
# ...
# Vault objects declaration
vault_objects:
- name: keycloak-realm-data
scope: instance # OPTIONAL: instance or qualifier (default when unset is qualifier)
type: configmap # OPTIONAL: configmap or secret (default when unset is configmap)
- name: keycloak-conf
#scope: qualifier # OPTIONAL: instance | qualifier (default when unset is qualifier)
- name: xtraho-service-env
#scope: qualifier # OPTIONAL: instance | qualifier (default when unset is qualifier)
- name: consul-creds
type: secret
name and scope are used to compute the code of a Vault object:
- scope qualifier: this scope generates a Vault object linked to the 'qualified app' (e.g. myapp-dev or myapp-prod)
  - "{k8s_app_code}-{package_qualifier_id.short_alias}-{type}-{config_map_name}"
  - e.g. mpy13c-dev-cfgmap-keycloak-conf
- scope instance: this scope generates a Vault object linked to the helm instance of the qualified app (e.g. myapp-dev-instance0)
  - "{k8s_app_code}-{package_qualifier_id.short_alias}-{record.helm_instance_name}-{type}-{config_map_name}"
  - e.g. mpy13c-dev-oursbleu-secret-consul-creds
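The naming rules above can be sketched in a few lines of Python. This is an illustrative reconstruction only (the function name and parameters are made up; the actual computation is done by Muppy):

```python
def vault_object_code(k8s_app_code, qualifier_alias, name,
                      scope="qualifier", type_="configmap",
                      helm_instance_name=None):
    """Illustrative sketch of the Vault object code naming rule (not Muppy's code)."""
    # In the code, configmap appears abbreviated as 'cfgmap'; secret stays 'secret'
    type_part = "cfgmap" if type_ == "configmap" else "secret"
    if scope == "qualifier":
        # Vault object linked to the qualified app (e.g. myapp-dev)
        return f"{k8s_app_code}-{qualifier_alias}-{type_part}-{name}"
    # scope == "instance": the helm instance name is inserted before the type
    return f"{k8s_app_code}-{qualifier_alias}-{helm_instance_name}-{type_part}-{name}"

print(vault_object_code("mpy13c", "dev", "keycloak-conf"))
# mpy13c-dev-cfgmap-keycloak-conf
print(vault_object_code("mpy13c", "dev", "consul-creds", scope="instance",
                        type_="secret", helm_instance_name="oursbleu"))
# mpy13c-dev-oursbleu-secret-consul-creds
```

Both outputs match the examples above.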
Once you're done with your Parts config, click the 'Sync Parts config.' button. This will update the 'Vault Objects' tab of the 'k8s Installed Package'.
Create Vault objects to store file content¶
The 'Vault Objects' tab of the 'k8s Installed Package' displays all the 'Vault Objects' defined in the 'Part config' tab.
Click the 'Create / Edit' button to create the 'Vault object' using information extracted from the declaration. This will create an empty Vault object. Complete its content. For ConfigMaps, you can choose the Sub type (Config or ENV file).
Note that:
- code: the Vault object code is automatically set based on the naming rule above. The code is used to bind the Vault object to the Package.
- name: used as the file name for Config files
Use Vault objects¶
Once declared, Vault objects can be used in several ways:
- configmaps can be used as Config file or sourced as an ENV file
- secrets can be sourced as an ENV file or mounted as individual files
Use ConfigMaps or Secrets as ENV Files¶
Use the envFrom section of a part.
Each key=value line of a ConfigMap or Secret having Sub type 'ENV file' will be sourced as an ENV var.
mpy_meta_config:
parts: # m2p deploys parts. Each m2p must have at least on part. Generaly a part is linked to a docker image.
myapp:
...
envFrom: # !!! Warning !!! This level is just below the part (not inside deployment)
- configMapRef: # to mount a ConfigMap Vault object as ENV File
name: xtraho-service-env
- secretRef: # to mount a Secret Vault object as ENV File
name: consul-creds
#
# Syntax Warning
# Don't do this !!! Indenting name under secretRef generates a different object
#- secretRef: # [{"secretRef": {"name": "consul-creds"}}]
# name: consul-creds
# Do this !!! name must be at the same indentation level as secretRef
#- secretRef: # [{"secretRef": null, "name": "consul-creds"}]
# name: consul-creds
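To make the 'ENV file' Sub type concrete: each line of the file is a KEY=value pair. A small parser sketch shows how such content maps to ENV vars (illustrative only, not Muppy's actual parser):

```python
def parse_env_file(text):
    """Parse KEY=value lines (ENV-file style) into a dict.
    Illustration of how an 'ENV file' Vault object is interpreted."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

# Content taken from the Secret example later in this page
content = "first_secret=trendy\nanother_secret=valued$"
print(parse_env_file(content))
# {'first_secret': 'trendy', 'another_secret': 'valued$'}
```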
Use ConfigMap as Config files¶
A ConfigMap with Sub type 'Config file (one)' can be mounted as a file (e.g. keycloak.conf).
Use the volumes section of a part.
Note the use of subPath to inject the config file into a folder without masking / resetting the mount folder content.
Without subPath, mountPath will contain only the Vault object.
mpy_meta_config:
parts:
kcloak:
...
volumes:
# For configMaps
- name: keycloak-realm-data
mountPath: /opt/keycloak/data/import # This will mount the ConfigMap as a folder content.
- name: keycloak-conf
mountPath: /opt/keycloak/conf/
subPath: keycloak.conf # This will mount the ConfigMap as a file!
# For Secrets - you cannot use mountPath.
# Each secret will be mounted exactly as one file.
# For example, given a Vault Object of type 'Secret - K8s' with 'File content'
# first_secret=trendy
# another_secret=valued$
# 2 files will be mounted: first_secret and another_secret.
# And their respective content will be `trendy` and `valued$`
- name: mpy-mkrds-secrets # [REQUIRED] must be a name declared in vault_objects
mountPath: "/etc/mpy-mkrds-secrets" # [REQUIRED] where to mount the volume
readOnly: true # [OPTIONAL] default is false
Mount Secrets as files¶
Available from 2.3.16
Secrets with Sub type 'ENV file' can be mounted as several files (one per line of the ENV file).
Use the volumes section of a part:
mpy_meta_config:
parts:
mkrds:
...
volumes:
# For Secrets - you cannot use subPath.
- name: mpy-mkrds-secrets # [REQUIRED] must be a name declared in vault_objects
mountPath: "/etc/mpy-mkrds-secrets" # [REQUIRED] where to mount the volume
readOnly: true # [OPTIONAL] default is false
Each secret will be mounted exactly as one file.
For example given a Vault Object of type 'Secret - K8s' having 'File content':
first_secret=trendy
another_secret=valued$
2 files will be mounted: first_secret and another_secret,
and their respective content will be trendy and valued$.
Rendering¶
ConfigMaps and Secrets can be rendered like Helm 'Values'. Use this if some properties of your ConfigMap are known only at deployment time or must be adapted to the package being deployed.
Example for keycloak.conf:
# Keycloak config file
# --- Proxy ---
proxy=edge
proxy-headers=xforwarded
#proxy=passthrough
hostname-strict=false
hostname-strict-https=false
# hostname is known only at deployment time, we use the renderer to generate
# hostname derived on package fqdn
hostname=https://keycloak-@{obj.fqdn_hostname}@
Rendering can be used in all ConfigMaps (ENV file and Config file).
The syntax and rendering context are the same as for a Helm Values file.
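Conceptually, the @{...}@ placeholders work like a simple template substitution over the rendering context. A rough Python illustration (the real renderer is Muppy's and exposes the full Helm Values context; this sketch only shows the idea):

```python
import re

def render(text, context):
    """Replace @{path.to.value}@ placeholders with values from a nested dict.
    Rough illustration of the rendering idea, not Muppy's actual renderer."""
    def lookup(match):
        value = context
        for key in match.group(1).split("."):
            value = value[key]  # walk the dotted path
        return str(value)
    return re.sub(r"@\{([^}]+)\}@", lookup, text)

# Line taken from the keycloak.conf example above
conf = "hostname=https://keycloak-@{obj.fqdn_hostname}@"
print(render(conf, {"obj": {"fqdn_hostname": "myapp.domain.ext"}}))
# hostname=https://keycloak-myapp.domain.ext
```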
User Manifests¶
Available from 2.3.17
You can now inject custom Kubernetes manifests directly using user_manifests.
This allows you to bypass certain limitations of Helm and test low-level configurations easily.
Read more about User Manifests →
CronJobs¶
Available from 2.3.12
CronJob is a type of part that allows you to set up Kubernetes CronJobs and control them from the Muppy Dashboard.
To define a CronJob, you define a cronjob section in your part instead of a deployment, as shown below.
mpy_meta_config:
parts: # m2p deploys parts. Each m2p must have at least one part. Generally, a part is linked to a Docker image.
{{part_name}}: # a meaningful name for the part
type: cronjob # [REQUIRED] next 2 lines are required for CronJobs
cronjob: # This defines a CronJob
#suspend: false # [OPTIONAL] default is true. Can be defined here or in the dashboard.
#schedule: "* * * 31 2" # [OPTIONAL] default is Feb. 31 = never, but the Job can be triggered manually
#successfulJobsHistoryLimit: 2 # [OPTIONAL] default is 2
#failedJobsHistoryLimit: 5 # [OPTIONAL] default is 5
#concurrencyPolicy: Forbid # [OPTIONAL] default is Forbid
#restartPolicy: OnFailure # [OPTIONAL] default = OnFailure
docker_image: ... # [OPTIONAL] By default the deployment image is defined in the profile and retrieved from values, but this allows forcing another image
command: [] # [optional]
args: [] # [optional]
env_vars: # [optional] additional ENV Vars can be defined here. Note that common_env_vars defined in values are always passed to each part.
- name: INOUK_SESSION_STORE
value: "postgresql"
resources: # [optional] Default resources requests and limits must be defined here. Resources defined in dashboard take precedence over these.
requests:
cpu: 0.1
memory: "200M"
limits:
cpu: 0.3
memory: "300M"
# Debug Mode setup
# CronJobs allow only debug_mode_available=null | basic
debug_mode_available: null | basic # [OPTIONAL] can be completely omitted if null
# Then you must define debug_config
# Finally, debug mode is activated from the Dashboard, so you must set up a dashboard to switch debug_mode on / off
# Debug Mode is switched on with $.Values.dashboards.[part].debug_mode
# Example configuration for "basic" debug mode
debug_config:
command: ["sleep"]
args: ["infinity"]
env_vars:
#- name: VARNAME
# value: "a value"
resources: # Here you can set different 'resources' applied in debug_mode
# volumes can be used with cronJobs. Setup is the same as deployment.
volumes:
- name: test-flight
mountPath: /test_flight
#readOnly: true
dashboard: # When dashboard is defined, a dashboard will be created with code = {{part_name}}
name: CronJob Fancy name # and this name will be used as the label
CronJobs can also use startupProbe and volumes. They are defined like deployment-based parts.
Working example of cronjob¶
mpy_meta_config:
parts:
test-cron:
cronjob:
#suspend: false # [OPTIONAL] default is true. Can be defined here or in the dashboard.
#schedule: "* * * 31 2" # [OPTIONAL] default is Feb. 31 = never, but the Job can be triggered manually
#successfulJobsHistoryLimit: 2 # [OPTIONAL] default is 2
#failedJobsHistoryLimit: 5 # [OPTIONAL] default is 5
#concurrencyPolicy: Forbid # [OPTIONAL] default is Forbid
#restartPolicy: OnFailure # [OPTIONAL] default = OnFailure
# Docker Image defined here has precedence over values.docker_image
docker_image: kennethreitz/httpbin
command: ['sleep'] # 'command' is required for cron (set in the dashboard or here)
args: ["10"] # [OPTIONAL] default is defined by the entrypoint
env_vars:
resources: # default resources, overridden by values set in the dashboard
requests:
cpu: 200m
memory: 200M
limits:
cpu: 400m
memory: 400M
volumes:
- name: test-flight
mountPath: /test_flight
#readOnly: true
debug_mode_available: basic
debug_config:
command: [sleep]
args: [infinity]
env_vars:
resources: # Here you can set different 'resources' applied in debug_mode
dashboard:
name: "The Cron must go on"
volumesDefinition:
- name: test-flight
description: axelor main volume
claim:
storageClassName: microk8s-hostpath # [optional] default is values.default_storage_class
#accessModes: [ReadWriteOnce] # [optional] default is ReadWriteOnce
size: 2Gi
Dashboard for CronJobs¶
Here is an example of valid CronJob data generated by Muppy for the dashboard above.
dashboards:
test-cron:
suspend: true
command: ["sleep"]
args: ["7"]
schedule: "*/1 * * * *"
resources:
requests:
cpu: 400m
memory: 400M
limits:
cpu: 800m
memory: 800M
Troubleshooting¶
Traefik logs errors reporting that the IPAllowList middleware is not found¶
This is a known problem: https://community.traefik.io/t/traefik-reporting-middleware-not-found-but-it-works/21424
You can use the Traefik Dashboard to check that the middleware is correctly loaded.
Dashboards¶
Dashboards are set up via three components:
- Parts Config. controls dashboard setup and debug mode availability
- Package Profile allows defining default values for dashboard controls (e.g. resources)
- The Package release's Scaling tab is the dashboard GUI
Overview¶
Muppy:
- analyzes mpy_meta_config (either in Parts Config or values) for mpy-metapackage sub-packages
- loads default values from the Package Profile
Users control packages using the GUI in the Scaling tab of the Package release.