The current PI-OASyS interface was designed for the 7.4 version of OASyS. This interface will remain supported for all versions it was released with. It uses OSIsoft APIs/SDKs which are practically EOL, and are certainly no longer evolving.
The use-cases for the PI interface have since evolved:
PI was originally expected to replace MSSQL in the control room. This didn't take hold, as the real value proposition of PI is more than just being a SCADA historian.
PI is now almost never run in the production zone: rarely in the DSS, and most often only in the corporate zone.
Some customers have installed an additional PI server to collect directly from either PROD or DSS Realtime and use PI2PI replication to get the data into a larger corporate historian
Customers have started incorporating PI into the Control System workflow, and therefore data has started coming back from PI into SCADA
OPC-UA and MQTT are options for this that already exist.
Authentication options have changed.
Windows Integrated Security with trusts (no longer an industry accepted design)
Windows Integrated Security without trusts, and manually synced shadow accounts (not a great admin experience)
PI Trusts are EOL
New security integration with DataHub / AVEVA connect
Instead, PI/DataHub has become a foundational element for AVEVA, taking feeds from all sorts of systems, one of which is SCADA.
Meanwhile, the SCADA platform has evolved:
The ability to put any field on exception collect (Aha Feature: OASYS-145) will increase the amount of data being sent to PI.
Memory management improvements being done by the architecture team will improve PubSub performance, but perhaps not enough for it to remain the best choice as a data source moving forward.
The High Performance Data Export Framework introduced a new low-level interface to data that has changed, and it might be an option for the new interface.
With the acquisition of OSIsoft, AVEVA DataHub is quickly becoming the de facto standard cloud data offering. Its new Adapter Framework gives a double return on our investment, as it can simultaneously deliver data to an on-prem PI and DataHub from a single interface instance running in our DMZ.
[MVP] Authentication/User Accounts
The account/privileges that the interface runs as should be different from those permitted to configure the interface, and should follow our best practices.
The interface will run from a dedicated system account. Configuration can be performed by a user with the appropriate privileges, in line with our product's security best practices.
PI-trusts are EOL and will not be supported.
Assume all on-prem PI servers are in a different windows domain.
AVEVA Connect will form part of the authentication of system-to-system connections to DataHub.
[Phase 2] Allow-listing for points being sent to PI / DataHub
Andrea/Kevin have a case in LATAM (PMFR-482) where the PI server is owned and operated by a third party, and the SCADA operator wants to limit which points are sent there.
The legacy interface is controlled by PI, in that if PI has the tagname, it can obtain the data from SCADA and there is no way to control that from the SCADA side. For backwards compatibility this functionality is desirable, as long as the SCADA admin has the final say as to which points go where.
Multiple PI destinations with different point lists
Customers want to send different data to multiple on-prem PI servers
Some customers will have PI on-prem and a DataHub instance. We should assume that most points will go to both, but the interface should be able to handle a different point list for different destinations (see the sketch below).
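Purely as an illustration (not the Adapter Framework's real configuration format), a per-destination allow-list could be modelled like this; the destination names, point names, and helper function are hypothetical:

```python
# Hypothetical per-destination allow-list routing; names and structure are
# illustrative only and do not reflect the actual Adapter Framework config.
DESTINATIONS = {
    "onprem_pi": {"allow": {"PIPE1.FLOW", "PIPE1.PRESSURE", "STN7.TEMP"}},
    "datahub":   {"allow": {"PIPE1.FLOW", "PIPE1.PRESSURE"}},
}

def destinations_for(point_name: str) -> list[str]:
    """Return the destinations a SCADA point may be sent to.

    The SCADA administrator has the final say: a point absent from a
    destination's allow-list is never sent there, even if the PI server
    already has a matching tag.
    """
    return [dest for dest, cfg in DESTINATIONS.items()
            if point_name in cfg["allow"]]

if __name__ == "__main__":
    print(destinations_for("PIPE1.FLOW"))   # ['onprem_pi', 'datahub']
    print(destinations_for("STN7.TEMP"))    # ['onprem_pi']
```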
Outage buffering
A request came in (PMFR-498) for buffering data during an outage of the connection to the PI/ADH server. It is assumed that the Adapter Framework has provisions for this.
[MVP if provided by framework]
[Future] Backfill
Desirable: the ability to backfill any data that was not buffered. It is not yet clear how a gap will be detected; assume the PI admin or a user of ADH will notice the gap and request that it be filled via the SCADA administrator. If the system detects that buffered data was not sent (timed out, buffer grew too big), perhaps it could record enough information to trigger its own fulfillment when the connection is restored (see the sketch below).
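A minimal sketch of the buffering/backfill idea, assuming the real buffering comes from the Adapter Framework; the class, buffer size, and gap-recording approach here are purely illustrative:

```python
from collections import deque

class OutageBuffer:
    """Buffer values during a PI/ADH outage and remember any dropped range."""

    def __init__(self, max_items: int = 100_000):
        self.buffer = deque()
        self.max_items = max_items
        self.dropped_ranges = []          # (first_ts, last_ts) gaps to backfill

    def enqueue(self, timestamp, point, value):
        if len(self.buffer) >= self.max_items:
            # Buffer is full: drop the oldest sample and record the gap so a
            # backfill can be triggered when the connection is restored.
            # For simplicity, contiguous drops are merged into one range.
            old_ts, _, _ = self.buffer.popleft()
            if self.dropped_ranges:
                first, _ = self.dropped_ranges[-1]
                self.dropped_ranges[-1] = (first, old_ts)
            else:
                self.dropped_ranges.append((old_ts, old_ts))
        self.buffer.append((timestamp, point, value))

    def drain(self, send):
        """Flush buffered values (oldest first) once the outage is over."""
        while self.buffer:
            send(*self.buffer.popleft())
        return self.dropped_ranges        # gaps that still need a backfill
```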
[MVP] PI Compression
It has been reported (OGPM-979) that our legacy interface resulted in diagonal interpolation lines being shown when plotting data that came from OASyS. At first glance this seems to be out of scope for our interface, but it appears that exception processing may occur within our interface. Our new interface should properly send the exception data needed for PI visualisation tools to properly represent step changes in the data (see the sketch below).
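One way to preserve step changes, sketched below with a hypothetical deadband filter (not the legacy interface's actual exception processing): when a value finally changes beyond the deadband, forward the last held value at its own timestamp before the new one, so plotting tools draw a step rather than a long diagonal line.

```python
class ExceptionFilter:
    """Illustrative exception filter that keeps both ends of a step change."""

    def __init__(self, deadband: float):
        self.deadband = deadband
        self.last_sent = None        # (timestamp, value) last forwarded
        self.last_seen = None        # (timestamp, value) most recent input

    def process(self, timestamp: float, value: float) -> list[tuple[float, float]]:
        out = []
        if self.last_sent is None or abs(value - self.last_sent[1]) > self.deadband:
            # Re-send the value that was being held, if any, so the archive
            # contains both ends of the step.
            if self.last_seen is not None and self.last_seen != self.last_sent:
                out.append(self.last_seen)
            out.append((timestamp, value))
            self.last_sent = (timestamp, value)
        self.last_seen = (timestamp, value)
        return out

if __name__ == "__main__":
    f = ExceptionFilter(deadband=0.5)
    print(f.process(0, 10.0))    # [(0, 10.0)]
    print(f.process(1, 10.1))    # [] -- within deadband, held
    print(f.process(2, 20.0))    # [(1, 10.1), (2, 20.0)] -- step preserved
```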
[MVP] Timestamp Fidelity
Data collected in our SQL Historian, and data sent to PI, should have the same shape when plotted. In particular, the timestamps of the inflection points in the curve should match.
The attached document from Collin Roth details the current shortcomings of the legacy interface, due mostly to the evolution of the design from a direct connection in PROD to one that involves replication through the DSS, and the various latencies and time-smear that ensue. There is a workaround that projects use regularly, which involves replicating the lastModTime field from PROD to DSS; by default this field is not replicated, to save bandwidth. By replicating it, the PI interface on the DSS triggers more frequently, sending more updates to PI.
In our new interface, we should always strive to send the data to PI with the most accurate timestamp. For sample collect, this is when the sample was taken. For exception collect, it is when we received the value from the field. For field-collected data, this should be the field timestamp (see the sketch below).
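A rough sketch of the timestamp-selection rule described above; the Update fields and CollectionMethod names are assumptions for illustration, not actual product structures:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class CollectionMethod(Enum):
    SAMPLE = auto()       # value sampled on a schedule
    EXCEPTION = auto()    # value forwarded because it changed
    FIELD = auto()        # device supplied its own timestamp

@dataclass
class Update:
    method: CollectionMethod
    sample_time: float               # when the sample was taken
    received_time: float             # when SCADA received the value from the field
    field_time: Optional[float]      # timestamp reported by the field device, if any
    value: float

def pi_timestamp(u: Update) -> float:
    """Pick the most accurate timestamp available for the PI event."""
    if u.method is CollectionMethod.FIELD and u.field_time is not None:
        return u.field_time
    if u.method is CollectionMethod.EXCEPTION:
        return u.received_time
    return u.sample_time
```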
[Phase 2] Data Overrange
The current PI interface sends the word OVERRANGE instead of the actual value when the point is in INST_FAIL (and possibly other conditions).
The new interface shall have a system-wide option to send the actual value instead (a minimal sketch follows).
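A minimal sketch of what such a system-wide option could look like; the setting name and the INST_FAIL handling shown here are illustrative only:

```python
# Hypothetical system-wide option: when a point is in INST_FAIL, either send
# the OVERRANGE digital state (legacy behaviour) or the actual value.
SEND_ACTUAL_VALUE_ON_INST_FAIL = True   # system-wide setting (illustrative)

def value_to_send(actual_value: float, inst_fail: bool):
    if inst_fail and not SEND_ACTUAL_VALUE_ON_INST_FAIL:
        return "OVERRANGE"              # legacy behaviour: a digital state
    return actual_value                 # new option: the real value
```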
Data Quality Mapping
A few customers use a second PI tag to store the data quality of the data PI tag. This will not be the norm once Enterprise Licensing, which gave customers access to unlimited PI tags, is removed.
Some customers use all the available PI "digital states" while some only use a few. The legacy interface only supports a few.
[MVP] the new interface will support the same DQ mapping as the legacy interface
[Phase 2] The administrator can configure, system-wide, a mapping between our Extended Dataquality (including custom DQ flags added by the integrator) and the available PI digital states on their PI server / ADH (a sketch follows this list).
[Future - depending on current usage] Administrators can use an SDK to create custom DQ mappings to PI digital states.
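For illustration only, the Phase 2 mapping described above could be a simple system-wide table from Extended Dataquality flags to digital states; the flag names and digital state names below are placeholders, not our actual DQ flags or the PI system digital state set:

```python
# Illustrative system-wide mapping from Extended Dataquality flags (including
# integrator-defined flags) to digital states. All names are placeholders.
DQ_TO_DIGITAL_STATE = {
    "GOOD":       None,             # good quality: send the value itself
    "INST_FAIL":  "Bad Input",
    "OLD_DATA":   "Scan Off",
    "MANUAL":     "Substituted",
    "CUSTOM_ICE": "Questionable",   # example of an integrator-added flag
}

def to_pi_event(value: float, dq_flag: str):
    """Return the value, or the mapped digital state, for a given DQ flag."""
    state = DQ_TO_DIGITAL_STATE.get(dq_flag, "Questionable")  # safe default
    return value if state is None else state
```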
[MVP] All tables with simple QVT data can be sent to PI
This also includes tables like remote, station, device, and remconjoin (OGPM-2289 details how SWG added a "remote+connection" field to remconnjoin to give them a "name" to publish to PI).
Address the reported issue with the legacy interface truncating PubSub topic strings, leading to them not being sent to PI: OASYS-I-17
The PI ICU interface comes with hundreds of health tags.
The ICU tool is apparently used a lot by PI administrators, and the legacy interface was not sending the appropriate information to be useful.
[Stretch for MVP]
How do we send non-numerical fields for a record to PI? Things like flag.msgtxt strings, and configuration fields like alarm limits
[MVP] Data should be delivered in chronological order as much as possible.
This is to ensure the highest performance on the PI server when data is read. PI can accept out of order data, but the recommendation is to limit this as much as possible.
This is listed here for consideration if a novel architecture is being contemplated, especially one that is more asynchronous than the current legacy interface (see the sketch below).
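If a more asynchronous architecture is chosen, a small re-ordering buffer is one way to keep writes mostly chronological. This sketch (window size and event shape are assumptions) holds events briefly and releases them in timestamp order:

```python
import heapq

class ReorderBuffer:
    """Hold events for a short window, then release them in timestamp order,
    limiting out-of-order writes to the PI archive."""

    def __init__(self, window_seconds: float = 2.0):
        self.window = window_seconds
        self.heap = []                     # (timestamp, point, value)

    def push(self, timestamp, point, value):
        heapq.heappush(self.heap, (timestamp, point, value))

    def pop_ready(self, now: float):
        """Release events older than the window, in chronological order."""
        ready = []
        while self.heap and self.heap[0][0] <= now - self.window:
            ready.append(heapq.heappop(self.heap))
        return ready
```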
[MVP] No duplicate data
Interface to refrain from sending "duplicate data" during transitions from "Bad quality" to "Good quality", so that logs are not needlessly filled with messages indicating that the received data is duplicated. OGPM-3036 and TFS# 275342 (note: PrdM has lost access to TFS and couldn't review this ticket).
In multi-master polling (see "Topologies" below), no duplicate data should be sent to PI (see the sketch below).
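A minimal sketch of duplicate suppression, assuming duplicates arrive as repeats of an identical (timestamp, value, quality) event; the actual detection rule would need to match the behaviour described in OGPM-3036:

```python
# point -> (timestamp, value, quality) of the last event written to PI
_last_sent: dict = {}

def should_send(point: str, timestamp: float, value: float, quality: str) -> bool:
    """Return False when an identical event was already sent for this point,
    e.g. a Bad -> Good repeat or a second master's copy of the same update."""
    event = (timestamp, value, quality)
    if _last_sent.get(point) == event:
        return False
    _last_sent[point] = event
    return True
```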
[MVP] Mode-switch aware
For each topology below, the interface should send only the data appropriate for the sending system, without the manual intervention that is currently required at Enbridge (see: "ENB Current").
One such topology is split-realtime - http://tcjira.aveva.com/browse/PMFR-405
[MVP] Data source methods
By Exception
By sample
SCADA point selection
Some customers prefer to simply have SCADA send all points that are already in PI. That is, the interface receives a list of points from PI that are configured with SCADA as their point source.
Some customers do not give point-create privileges to their SCADA admins. Instead, they create blocks of dummy PI points that can then be edited by the SCADA admin. The SCADA admin then picks one of these dummy points, reconfigures it to receive data from SCADA, edits the name, etc.
Some customers want SCADA to automatically start sending data to PI for a point if so configured. These customers have PI admins that regularly review newly created points to ensure they follow the proper conventions. In this situation, the PI tag name may be edited after it was created by our interface, so our interface will need some other indelible way to continue to identify the point after it is renamed by the PI admin (see the sketch below).
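One possible approach, sketched below with hypothetical field names, is to key the SCADA-side mapping on a stable identifier rather than the PI tag name, so a rename by the PI admin only updates a cached label and the link survives:

```python
# Sketch only: the pi_point_id field, SCADA reference format, and storage
# shape are assumptions for illustration.
point_map = {
    # stable SCADA point reference -> stable PI identifier (not the tag name)
    "analog.PIPE1_FLOW.currentvalue": {
        "pi_point_id": 4711,
        "last_known_tag": "PIPE1.FLOW",
    },
}

def record_rename(scada_ref: str, new_tag_name: str) -> None:
    """Update only the cached tag name; the stable ID keeps the link intact."""
    point_map[scada_ref]["last_known_tag"] = new_tag_name
```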
[MVP] Tag Name Aliases
The adapter must allow an alias to be configured for a point, so that the PI administrators can use different naming conventions than the SCADA administrator.
This may not be an actual requirement for us, unless it is impacted by our design or choice of SDK/Framework.
Other Non-Functional Requirements
Will use the HPDEF framework
Will use the Adapter Framework
Will be independently releasable
Will support ES23, ES24 and ES25
Size: Small to XXL
Terminology:
Single Master Realtime = One Hot/Standby Realtime pair at the Primary Control Center (PCC), polling all datasets. Mirrored at the Backup Control Center (BCC).
Multi-Master Realtime = Hot/Standby pairs at the PCC and BCC, polling the datasets they own.
Split Realtime = More than one Hot/Standby pair at one control center, polling the datasets they own. Mirrored at the Backup Control Center (BCC).
Store and Forward = Master / sub-master arrangement where Hot/Standby is replicating up to a PCC, and from there to the DSS.
DSS = single or dual hosted Realtime in a DMZ, different domain, Purdue-compliant
DataHub = Cloud hosted PI
On Prem PI = PI Server or Collective
Topologies:
[MVP] Station or Single Site:
Single Master Realtime, no DSS, no Backup site, DataHub + On Prem PI in corporate domain and network zone
[MVP] Simple:
Single Master Realtime, DSS, On Prem PI in Corporate domain and network zone
Single Master Realtime, DSS, DataHub
Single Master Realtime, DSS, DataHub + On Prem PI in corporate domain and network zone
[Phase 2] Multi-master:
Multi-Master Realtime, DSS at only PCC
Multi-Master Realtime, DSS at both PCC & BCC
[Phase 2] Split Realtime:
Split Realtime, DSS at both PCC & BCC (This is the ENB configuration)
[MVP unless it’s difficult, in which case ‘Future’] Store and Forward:
Store and Forward, DSS at PCC.
No data coming back to SCADA - that can be achieved today with OPC or MQTT.
No PI-AF requirements - this will come in later iterations of the interface
No User Interface to administer points in PI - users will use existing PI software.
GMAS to PI will remain a project deliverable for now, but a program brief is being drafted to address this in the future.
No automatic backfill after the conclusion of an outage.
No testing requested for data diodes. The Waterfall brand data-diode claims support for Adapter Framework, so we should be covered.
Will use HPDEF
Will use the Adapter Framework
PI Server
PI Adapter Framework
PI Authentication Methods
AVEVA Connect (for access to DataHub)
Customers using the new interface will most likely have an existing PI server, getting data from the existing interface.
The configuration is expected to be different, so a translation/migration/mapping from old to new configuration styles would be desirable.
The performance of the new interface is expected to be different, so a description of these differences will help forestall questions from customers. Things like data fidelity, the data qualities available, etc.
Customers using the old interface will expect the new interface to provide the same data, except for the improvements that we make in this new interface. Some way to test and compare old and new data streams is desirable (see the sketch below).
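A rough sketch of one way to compare the two streams for a single tag, assuming both can be exported as (timestamp, value) pairs for the same period; the function and tolerance are illustrative:

```python
def compare_streams(legacy, new, value_tolerance=1e-6):
    """Report timestamps present in only one stream, and matching timestamps
    whose values differ beyond the tolerance."""
    legacy_by_ts = dict(legacy)
    new_by_ts = dict(new)
    only_legacy = sorted(set(legacy_by_ts) - set(new_by_ts))
    only_new = sorted(set(new_by_ts) - set(legacy_by_ts))
    mismatched = sorted(ts for ts in set(legacy_by_ts) & set(new_by_ts)
                        if abs(legacy_by_ts[ts] - new_by_ts[ts]) > value_tolerance)
    return {"only_legacy": only_legacy,
            "only_new": only_new,
            "value_mismatch": mismatched}

if __name__ == "__main__":
    old = [(0, 10.0), (5, 10.5), (10, 11.0)]
    new = [(0, 10.0), (5, 10.6), (15, 11.5)]
    print(compare_streams(old, new))
```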
The request is for this interface to be independently releasable, which implies that it will be possible to deploy the new connector in a live-update scenario. That means no loss of control of the pipeline, a minimal reduction in the level of redundancy, and a seamless experience that does not cause data loss on the target PI server; a temporary interruption to data flow is acceptable.
Expectation
A new PI/ADH interface that can be independently upgraded as it evolves. It needs to integrate with AVEVA Connect for identity management into ADH. The new interface will be based on current technology. The MVP will be like-for-like, just sending data to PI on-prem. Phase 2 will add support for more topologies, DQ mapping, and allow-listing. Future enhancements will be things like more customisations for DQ mapping, administrator experience, and anything that the hybrid-cloud products require, like data model interchange.
Idea business value
The PI historian is widely considered the System of Record for our customers. The current interface has limitations and other issues that need to be addressed in order for us to claim the highest reliability and fidelity of data into the System of Record. Integration with DataHub is the top priority, as it enables much more than just getting data into a historian; pretty much the entire "Intelligent Midstream" initiative depends on this integration. As such, it is expected that this interface will iterate as other portfolio integrations mature, for example the sending, receiving, translating, and using of data models between SCADA and other portfolio products. PI-AF will be the next item on this list, and will expand to include data model / asset library mapping and conversions. The request for independent releasability will require large amounts of non-functional work in DevOps, System Test, and potentially TechPubs documentation.
Idea Type: Improvement
Idea priority: 5 – Critical to my company
Work in: OASYS-E-9 Mega Crust [M1] - Analog point value can be selected, configured and sent to PI Server
Work status: Ready to ship