Release v2.1.1
- Added support for TLS Ingress creation without specifying the certificate [#16021]
Known Issues:
- Clusters created through Rancher can sometimes get stuck in provisioning [#15969]
- An issue with Flannel can break pods’ cross-host communication [#13644]
- Flannel does not work with Kubernetes 1.12 (EXPERIMENTAL) clusters
- Fixed an issue where incorrect client-side caching resulted in an older UI version being used on upgraded Rancher setups [#16041]
- Fixed an issue where logging in as an Azure AD user with many groups was slow
- Fixed an issue where a large LDAP database caused Rancher LDAP query timeouts [#15950]
- Fixed an issue where charts from the helm-stable repository that had parameters set failed to deploy
- Fixed an issue where a panic was observed in the alert controller [#16001]
- Fixed an issue where the server-url could not be updated via the UI [#16140]
- Fixed an issue where annotations could not be set via the UI when creating an Ingress
- Fixed an issue where upgrading a Rancher HA installation with only 1 or 2 nodes having the “worker” role failed, because the new default scale for Rancher server replicas is 3 [#16068]
- Fixed an issue where Rancher server upgrade was broken for setups with SSL termination configured on an external load balancer
- Fixed an issue where Rancher server upgrade was broken for installs with privateCA set [#16053]
- Fixed an issue where an upgrade failed for a single-node Rancher cluster
Images:
- rancher/rancher:v2.1.1
- rancher/rancher-agent:v2.1.1
Tools:
- cli - v2.0.5
- rke -
Due to the HA improvements introduced in the v2.1.0 release, the Rancher helm chart is the only supported method for installing or upgrading Rancher. Please use the Rancher helm chart to install HA Rancher. For details, see the HA Install - Installation Outline.
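Since the helm chart is now the only supported install method, an HA install or upgrade can be sketched roughly as follows. This is a minimal example, not the full documented procedure: the release name, namespace, and hostname value are placeholders, and the repository URL is assumed to be the standard Rancher stable chart repo.

```shell
# Add the Rancher stable chart repository (assumed standard repo URL)
helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
helm repo update

# Install or upgrade Rancher into the cattle-system namespace.
# rancher.example.com is a placeholder hostname.
# The chart now defaults to 3 server replicas; override replicas only
# if the cluster has fewer than 3 nodes with the "worker" role.
helm upgrade --install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set replicas=1
```

Refer to the HA Install documentation for the full set of chart options, including TLS and private CA configuration.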
- Any upgrade from a version prior to v2.0.3: in order to update scheduling rules for workloads [#13527], a new field was added to all workloads on update, which will cause any pods in workloads from previous versions to re-create when scaling up.
Note: If you had the helm stable catalog enabled in v2.0.0, we’ve updated the catalog to point directly to the Kubernetes helm repo instead of an internal repo. Please delete the custom catalog that now appears and re-enable helm stable.