

Upgrade to v1.7


This chapter outlines the configuration changes introduced in version 1.7 and provides step-by-step instructions for upgrading from version 1.6.

Breaking Changes

Starting with Networks Locator v1.7, several Docker containers and Helm charts previously based on Bitnami Open Source Components have been replaced with their official community counterparts due to changes in Bitnami’s licensing policy.

Affected Containers:

MongoDB

PostgreSQL

RabbitMQ

Keycloak

Impact:

RabbitMQ and Keycloak: No additional actions required.

PostgreSQL and MongoDB: Manual migration is required because their Persistent Volume (PV) structures are incompatible with the new community images.

Please note: This version additionally requires a manual update to the data stored in MongoDB. The primary user identifier has been changed from the email address to the Keycloak user ID to improve consistency and security.

These migration steps are required only once. Future releases will not require this process.

Pre-Upgrade Steps

Dump Existing Databases

This step must be completed before the update.

PostgreSQL (Network Locator User Management database):

The following command requires the password for the keycloak user to be provided via stdin:

# Note: the database user may differ in your environment
kubectl exec -it -n <namespace> network-locator-keycloak-postgresql-0 -- pg_dump -U keycloak bitnami_keycloak > dump.sql

The dump.sql file will be created on your local machine.

MongoDB:

kubectl exec -it -n <namespace> network-locator-mongodb-0 -- mongodump --db process-manager-db --out /tmp/mongodump
kubectl exec -it -n <namespace> network-locator-mongodb-0 -- mongodump --db config-db --out /tmp/mongodump
kubectl exec -it -n <namespace> network-locator-mongodb-0 -- mongodump --db storage-db --out /tmp/mongodump
kubectl cp -n <namespace> network-locator-mongodb-0:/tmp/mongodump ./mongodump
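The three mongodump calls can also be written as a single loop over the database names (a sketch; pod name, namespace placeholder, and databases exactly as above):

```
for db in process-manager-db config-db storage-db; do
  kubectl exec -it -n <namespace> network-locator-mongodb-0 -- mongodump --db "$db" --out /tmp/mongodump
done
kubectl cp -n <namespace> network-locator-mongodb-0:/tmp/mongodump ./mongodump
```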

Update the Helm Values File

Since Keycloak is no longer provided by the Bitnami chart, the structure of the settings in the values.yaml file has changed.

Please review your current values.yaml file and compare it with the updated structure described in values.yaml.

Step-by-Step Upgrade Instructions

1.Back up your Process Manager, Storage, and Sync Service settings by saving the JSON configuration from the Admin Client to a local file as a precaution.

2.Make sure you have a valid connection to your Kubernetes cluster.

3.Uninstall the Existing HELM Deployment: Run the following command to uninstall the current deployment:

helm uninstall <RELEASENAME> --namespace <NAMESPACE>

No data loss will occur during uninstallation. The Locator uses persistent volumes to store essential data, which will remain intact and be reused by the new version.

4.Decide whether you want to deploy the solution package manually or let the Helm deployment take care of it:

a.via HELM: add this to your values.yaml directly under the global property:

 solutionDeployment:
   portalUrl: "{{ tpl .Values.global.arcgis.enterpriseUrl . }}/portal"
   portalUsername: <PORTAL-USERNAME>
   portalPassword: <PORTAL-PASSWORD>
   webViewerUrl: <STUDIO-WEB-VIEWER-URL>
   workflowDesignerUrl: <STUDIO-WORKFLOW-DESIGNER-URL>
   webMapUrl: <WEBMAP-URL>
   webViewerAccountId: <STUDIO-WEB-VIEWER-ACCOUNTID>
   # https://<KUBERNETES-HOST>/api-network-locator-gateway
   locatorUrl: <LOCATOR-URL>
   # https://<KUBERNETES-HOST>/api-network-locator-gateway/history/FeatureServer/1
   featureLayerUrl: <FEATURE-LAYER-URL>

Currently, the portalUsername and portalPassword must be hardcoded in the values.yaml file. Starting with the next release, these credentials will be securely read from a Kubernetes secret instead.

b.manually: deploy the solution package yourself and add the appid for the client and cockpit to your values.yaml file.

In this case, make sure you set the property global.solutionDeployment.enableDeployment to false.
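For the manual option, the relevant values.yaml fragment might look like the following (a minimal sketch; only the enableDeployment key is confirmed by this guide, and the surrounding structure follows the global.solutionDeployment path named above):

```yaml
global:
  solutionDeployment:
    # Disable automatic solution deployment because the package was deployed manually.
    enableDeployment: false
```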

5.Log in to the VertiGIS Container Registry:

a.helm registry login vertigisapps.azurecr.io

6.Deploy version 1.7 using HELM.

helm install <RELEASENAME> oci://vertigisapps.azurecr.io/network-locator/helm-chart \
   --namespace <NAMESPACE> \
   -f values.yaml \
   --wait \
   --version 1.7.0 \
   --timeout 30m0s

Once the Helm install has started, you can continue directly with restoring the databases as described in the next step. You do not need to wait for the install to complete.
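While the install runs, you can monitor its progress from a second terminal using standard Helm and kubectl commands (release name and namespace as above):

```
helm status <RELEASENAME> --namespace <NAMESPACE>
kubectl get pods -n <NAMESPACE> -w
```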

Restore databases

Temporarily disable the connection between Keycloak and PostgreSQL

kubectl scale deployment -n <NAMESPACE> network-locator-keycloak --replicas=0

Keycloak

Drop and recreate the database, then restore from the dump:

# Note: the database user may differ in your environment
kubectl exec -i -n <namespace> network-locator-keycloak-postgres-statefulset-0 -- psql -U keycloak -d postgres -c "DROP DATABASE IF EXISTS keycloak_db;"
kubectl exec -i -n <namespace> network-locator-keycloak-postgres-statefulset-0 -- createdb -U keycloak keycloak_db
kubectl exec -i -n <namespace> network-locator-keycloak-postgres-statefulset-0 -- psql -U keycloak keycloak_db < dump.sql

Finally, re-enable the connection between Keycloak and PostgreSQL

# Note: adjust the namespace and deployment name to your environment
kubectl scale deployment -n <NAMESPACE> network-locator-keycloak --replicas=1

MongoDB:

Copy the MongoDB dump back into the pod and restore the databases:

kubectl cp ./mongodump <namespace>/network-locator-mongodb-statefulset-0:/tmp/mongodump
kubectl exec -it -n <namespace> network-locator-mongodb-statefulset-0 -- mongosh --eval "db.getSiblingDB('process-manager-db').dropDatabase()"
kubectl exec -it -n <namespace> network-locator-mongodb-statefulset-0 -- mongorestore --db process-manager-db /tmp/mongodump/process-manager-db
kubectl exec -it -n <namespace> network-locator-mongodb-statefulset-0 -- mongosh --eval "db.getSiblingDB('config-db').dropDatabase()"
kubectl exec -it -n <namespace> network-locator-mongodb-statefulset-0 -- mongorestore --db config-db /tmp/mongodump/config-db
kubectl exec -it -n <namespace> network-locator-mongodb-statefulset-0 -- mongosh --eval "db.getSiblingDB('storage-db').dropDatabase()"
kubectl exec -it -n <namespace> network-locator-mongodb-statefulset-0 -- mongorestore --db storage-db /tmp/mongodump/storage-db

If restore fails, drop the corresponding database and retry.

Delete old PVCs

If you are sure that everything worked correctly and no data loss occurred, you can delete the old PVCs:

data-network-locator-keycloak-postgresql-0

data-network-locator-rabbitmq-0

datadir-network-locator-mongodb-0
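Assuming the PVC names listed above and your release namespace, the cleanup can be done with kubectl; it is a good idea to list the PVCs first and verify the names before deleting:

```
kubectl get pvc -n <NAMESPACE>
kubectl delete pvc -n <NAMESPACE> data-network-locator-keycloak-postgresql-0 data-network-locator-rabbitmq-0 datadir-network-locator-mongodb-0
```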

Update User Identification

In previous releases of Networks Locator, the link between a request in the database and a user was made through the email address. This was not optimal, since email addresses can change over time. Starting with Locator 1.7, the unique Keycloak user ID is used as the identifier.

If you are starting with Networks Locator from scratch, no action is needed and you can skip this section.

If you already have locate requests in your database and want to ensure that they remain assigned to the correct users, you will need to modify the underlying database.

A sample script is provided below. First, open a mongosh session in the MongoDB pod:

kubectl exec -n <NAMESPACE> -it network-locator-mongodb-statefulset-0 -- mongosh

MongoDB Script

Adapt the mapping of email addresses to user IDs before running the script.

const emailToUuid = {
  "max.mustermann@vertigis.com": "59ab8c4b-b02c-4fac-ab1e-94d77ed8836f",
  "peter.lustig@vertigis.com": "59ab8c4b-b02c-4fac-ab1e-asdasdwwda",
};

// Rewrite the userId of each locate request from email to Keycloak user ID.
use("process-manager-db");
db.request_data.find().forEach(doc => {
  const newUuid = emailToUuid[doc.userId];
  if (!newUuid) {
    print(`No UUID found for userId = ${doc.userId}`);
    return;
  }

  db.request_data.updateOne(
    { _id: doc._id },
    { $set: { userId: newUuid } }
  );
});

// Rewrite the ownerId of each stored file in the same way.
use("storage-db");
db.file_entries.distinct("ownerId"); // optional: list the current owner identifiers
db.file_entries.find().forEach(doc => {
  const newUuid = emailToUuid[doc.ownerId];
  if (!newUuid) {
    print(`No UUID found for ownerId = ${doc.ownerId}`);
    return;
  }

  db.file_entries.updateOne(
    { _id: doc._id },
    { $set: { ownerId: newUuid } }
  );
});
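The core of the script above is a lookup-and-skip mapping: documents whose email has a known UUID are rewritten, all others are left untouched. This plain-JavaScript sketch (with hypothetical sample documents; migrateUserIds is an illustrative helper, not part of the product) shows the behavior outside of mongosh:

```javascript
// Map email-based user IDs to Keycloak UUIDs; leave unknown emails untouched.
const emailToUuid = {
  "max.mustermann@vertigis.com": "59ab8c4b-b02c-4fac-ab1e-94d77ed8836f",
};

function migrateUserIds(docs, mapping) {
  return docs.map(doc => {
    const newUuid = mapping[doc.userId];
    if (!newUuid) return doc;            // no mapping: keep the email as-is
    return { ...doc, userId: newUuid };  // rewrite to the Keycloak user ID
  });
}

const migrated = migrateUserIds(
  [
    { _id: 1, userId: "max.mustermann@vertigis.com" },
    { _id: 2, userId: "unknown@example.com" },
  ],
  emailToUuid
);
console.log(migrated[0].userId); // "59ab8c4b-b02c-4fac-ab1e-94d77ed8836f"
console.log(migrated[1].userId); // "unknown@example.com"
```

Entries without a mapping keep their email address, which mirrors the `print`-and-`return` branch in the mongosh script.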

Configuration Changes

Consider which of the new features you want to use and configure them according to your needs:

1.DXF-Export

2.Return Distinct Feature in Layer Intersections

3.Set the always-print flag for an intersection group to always print it, independent of any features in the request area.

4.You can now customize the order of print templates displayed in the Networks Locator Client dropdown.

5.You can now add a bitmap image to dynamically generated PDF documents.

6.You can now use the parameters nodeSelector, tolerations and affinity as global parameters in the Helm values file. They are honored by all Locator subcharts except Camunda and the Kubernetes Dashboard, which have their own concepts. See https://github.com/kubernetes/dashboard/blob/master/charts/kubernetes-dashboard/values.yaml and https://artifacthub.io/packages/helm/camunda/camunda-platform/10.5.0#parameters

7.You can now define that the Keycloak login page will be loaded outside of the Networks Locator iFrame.

a.The Locator-Client and Locator-Cockpit containers now support the additional environment variable LOGIN_PAGE_TARGET, which controls how the Keycloak login page is opened. Possible values are self, blank or popup. If nothing is set, the default behavior is unchanged: the login page is loaded inside the iFrame.
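The global scheduling parameters from item 6 can be sketched in values.yaml as follows (the key names nodeSelector, tolerations and affinity under global come from the release note; the concrete label and toleration values are purely illustrative and must be replaced with your own):

```yaml
global:
  # Illustrative values only; honored by all Locator subcharts except
  # Camunda and the Kubernetes Dashboard.
  nodeSelector:
    kubernetes.io/os: linux
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "locator"
      effect: "NoSchedule"
  affinity: {}
```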

© 2025 VertiGIS North America Ltd. All Rights Reserved.
Documentation Version 1.7 (fb2abb08)