This chapter outlines the configuration changes introduced in version 1.8 and provides step-by-step instructions for upgrading from version 1.7.
This release introduces full-text search capabilities, including a new spatial filter powered by Apache Solr Spatial, now accessible directly via the Orderbook. To support this infrastructure, three new Helm charts have been added (vnl-solr-spatial, vnl-solr-connector, and vnl-zookeeper), which deploy the necessary Apache Solr and Apache ZooKeeper pods. No mandatory updates to your values.yaml file are required.
To ensure that requests created with previous versions are included in search results, you must perform a one-time manual population of the Solr index. We provide a Kubernetes CronJob (vnl-solr-spatial-reindex) that can be manually triggered to perform this task.
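One way to trigger the CronJob manually is to create a one-off Job from it with kubectl. This is a sketch assuming standard kubectl tooling; the job name solr-reindex-manual is arbitrary and the namespace is a placeholder:

```shell
# Create a one-off Job from the reindex CronJob
kubectl create job solr-reindex-manual \
  --from=cronjob/vnl-solr-spatial-reindex \
  --namespace <NAMESPACE>

# Follow the reindex progress
kubectl logs job/solr-reindex-manual --namespace <NAMESPACE> --follow
```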
While no configuration is required, you can optionally override the following new global Helm values to adjust replica counts for your specific environment:
global:
  solr:
    spatial:
      replicaCount: 3
  zookeeper:
    replicaCount: 3
Note that solr-spatial and zookeeper are stateful services; as such, each replica will provision its own dedicated Persistent Volume Claim (PVC).
1. Back up your Process Manager, Storage, and Sync Service settings by saving the JSON configuration from the Admin Client to a local file as a precaution.
2. Make sure you have a valid connection to your Kubernetes cluster.
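A quick way to verify the connection, assuming kubectl is configured for the target cluster:

```shell
# Show which context kubectl is currently pointing at
kubectl config current-context

# Confirm the cluster responds
kubectl get nodes
```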
3. Uninstall the existing Helm deployment by running the following command:
helm uninstall <RELEASENAME> --namespace <NAMESPACE>
No data loss will occur during the uninstall. The Locator uses persistent volumes to store essential data, which will remain intact and be reused by the new version.
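If you want to confirm this, you can list the Persistent Volume Claims in the namespace; helm uninstall does not delete PVCs, so they should still be present afterwards:

```shell
# PVCs survive "helm uninstall" and are reused by the new release
kubectl get pvc --namespace <NAMESPACE>
```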
4. Decide whether you want to deploy the solution package manually or let the Helm deployment take care of it.
a. Via Helm: add the following to your values.yaml directly under the global property:
solutionDeployment:
  portalUrl: "{{ tpl .Values.global.arcgis.enterpriseUrl . }}/portal"
  portalUsername: <PORTAL-USERNAME>
  portalPassword: <PORTAL-PASSWORD>
  webViewerUrl: <STUDIO-WEB-VIEWER-URL>
  workflowDesignerUrl: <STUDIO-WORKFLOW-DESIGNER-URL>
  webMapUrl: <WEBMAP-URL>
  webViewerAccountId: <STUDIO-WEB-VIEWER-ACCOUNTID>
  # https://<KUBERNETES-HOST>/api-network-locator-gateway
  locatorUrl: <LOCATOR-URL>
  # https://<KUBERNETES-HOST>/api-network-locator-gateway/history/FeatureServer/1
  featureLayerUrl: <FEATURE-LAYER-URL>
Currently, the portalUsername and portalPassword must be hardcoded in the values.yaml file. Starting with the next release, these credentials will be securely read from a Kubernetes secret instead.
b. Manually: deploy the solution package yourself and add the app IDs for the Client and the Cockpit to your values.yaml file. In this case, make sure you set the property global.solutionDeployment.enableDeployment to false.
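For the manual option, the relevant values.yaml fragment could look like the following sketch. Only the enableDeployment property is taken from the step above; where exactly the app IDs belong depends on your chart's values schema and is not shown here:

```yaml
global:
  solutionDeployment:
    # Prevent the Helm deployment from deploying the solution package itself
    enableDeployment: false
```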
5. Log in to the VertiGIS Container Registry:
helm registry login vertigisapps.azurecr.io
6. Deploy version 1.8 using Helm:
helm install <RELEASENAME> oci://vertigisapps.azurecr.io/network-locator/helm-chart \
--namespace <NAMESPACE> \
-f values.yaml \
--wait \
--version 1.8.0 \
--timeout 30m0s
Once the Helm install has started, you can continue directly with restoring the database as described in the next step. You do not need to wait for the Helm install to complete.
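Because the --wait flag blocks the shell until the release is ready, you can follow the progress from a second terminal, for example:

```shell
# Watch the pods of the release come up
kubectl get pods --namespace <NAMESPACE> --watch

# Or check the release status reported by Helm
helm status <RELEASENAME> --namespace <NAMESPACE>
```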
Consider which of the new features you want to use and configure them according to your needs:
It is now possible to reference the start and end dates of locate requests:
{#if request.start_date??}
<fo:block font-size="11pt" space-before="11pt" text-align="justify">
Start date: {time:format(request.start_date,'dd.MM.yyyy',global:locale,global:zoneId)}
</fo:block>
{/if}
{#if request.end_date??}
<fo:block font-size="11pt" space-before="11pt" text-align="justify">
End date: {time:format(request.end_date,'dd.MM.yyyy',global:locale,global:zoneId)}
</fo:block>
{/if}
There have been no changes to the workflows in this release.