How-to Guide

WebID & Lookup

Every resource in PI Web API is identified by a WebID -- an opaque, URL-safe string that encodes the resource type, server, and identity. This guide explains the WebID format, all lookup methods (path, search, name filter), AF vs PI point differences, caching strategies, and bulk resolution patterns.

What is a WebID?

A WebID is a URL-safe, base64-encoded identifier assigned to every PI resource: points, elements, attributes, servers, databases, and more. You need a resource's WebID to read or write its data via the streaming endpoints.

WebIDs are stable across requests but may change if the underlying object is deleted and recreated, or if the PI Data Archive is migrated. Always look up WebIDs by path or search rather than hardcoding them.

example_webid.txt
# A typical PI point WebID looks like this:
P1DP9We5Y6kuE07EDQR39tVQ2wQQAAAAUElTUlYwMlxTSU5VU09JRA

# Breaking it down:
#   P1  = WebID type prefix (PI Point, encoding type 1)
#   DP9We5Y6kuE07EDQR39tVQ2w = encoded server GUID
#   QQAAAAUElTUlYwMlxTSU5VU09JRA = encoded point identity

WebID type prefixes

The first two characters of a WebID tell you what type of resource it represents and which encoding was used. This is essential for debugging -- if you have a WebID, you can immediately tell what kind of object it refers to.

| Prefix | Resource type | Example context |
|--------|---------------|-----------------|
| P1 | PI Point (by GUID) | Individual tag on a PI Data Archive |
| I1 | PI Point (by ID) | Point referenced by integer point ID |
| F1 | AF Element | Element in an AF hierarchy |
| Ab | AF Attribute | Attribute on an AF element |
| RD | AF Database | Top-level AF database |
| S1 | PI Data Server | PI Data Archive instance |
| RS | AF Server | PI AF Server instance |
| Fm | AF Event Frame | Event frame in AF |
| ET | AF Element Template | Template for creating elements |

Quick debugging trick

If a batch request fails with 404 on a WebID, check the prefix. A P1 prefix means it's a PI point -- pass it to /points/{webId}. An F1 prefix means it's an AF element -- pass it to /elements/{webId}. Using the wrong endpoint for the resource type gives a 404 even if the WebID is valid.
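This prefix check can be turned into a small routing helper. A sketch -- the mapping mirrors the table above, but the endpoint names beyond /points and /elements are the standard PI Web API controller routes, so verify them against your server; `endpoint_for_webid` is an illustrative name, not part of the API:

```python
# Map WebID prefixes (from the table above) to the controller endpoint
# that accepts that resource type.
PREFIX_ENDPOINTS = {
    "P1": "points",
    "I1": "points",
    "F1": "elements",
    "Ab": "attributes",
    "RD": "assetdatabases",
    "S1": "dataservers",
    "RS": "assetservers",
    "Fm": "eventframes",
    "ET": "elementtemplates",
}


def endpoint_for_webid(web_id: str) -> str:
    """Return the resource endpoint for a WebID, based on its two-character prefix."""
    prefix = web_id[:2]
    try:
        return PREFIX_ENDPOINTS[prefix]
    except KeyError:
        raise ValueError(f"Unknown WebID prefix: {prefix!r}")


# A P1 WebID belongs on /points/{webId}, not /elements/{webId}
print(endpoint_for_webid("P1DP9We5Y6kuE07EDQR39tVQ2wQQ"))  # points
```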

AF elements vs PI points

Understanding the difference between AF elements and PI points is fundamental to working with WebIDs correctly.

| Concept | PI Point | AF Element |
|---------|----------|------------|
| What it is | A single time-series data tag on a PI Data Archive | A logical object in an AF hierarchy (e.g., a pump, reactor, building) |
| Has values? | Yes, directly | No -- its attributes have values (which may reference PI points) |
| Read data via | /streams/{pointWebId}/value | /streams/{attributeWebId}/value |
| Lookup endpoint | /points?path=\\\\SERVER\\tagname | /elements?path=\\\\AF\\DB\\Element |
| WebID prefix | P1 | F1 (element), Ab (attribute) |

Common mistake: reading values from an AF element WebID

You cannot read values directly from an AF element (F1 prefix). You must first get its attributes, then read values from the attribute WebIDs (Ab prefix). Passing an element WebID to /streams/{webId}/value returns a 400 Bad Request.
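The correct two-step pattern is to list the element's attributes first, then stream from each attribute WebID. A minimal sketch assuming a requests-style `session` as used throughout this guide; `read_element_values` is an illustrative helper, not a PI Web API call:

```python
def read_element_values(session, base_url: str, element_web_id: str) -> dict[str, object]:
    """Read current values for an AF element via its attributes.

    You cannot stream from the element (F1) WebID itself; values live on
    the attribute (Ab) WebIDs.
    """
    # Step 1: list the element's attributes
    attrs = session.get(
        f"{base_url}/elements/{element_web_id}/attributes",
        params={"selectedFields": "Items.WebId;Items.Name"},
    )
    attrs.raise_for_status()

    # Step 2: read the current value from each attribute's WebID
    values = {}
    for attr in attrs.json().get("Items", []):
        value_resp = session.get(
            f"{base_url}/streams/{attr['WebId']}/value",
            params={"selectedFields": "Timestamp;Value"},
        )
        value_resp.raise_for_status()
        values[attr["Name"]] = value_resp.json().get("Value")
    return values
```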

Look up by path (recommended)

Path-based lookup is the most reliable method. It does not require PI Indexed Search to be configured and works on every PI Web API installation.

PI point by path

point_by_path.py
# Look up a PI point by its full path
response = session.get(
    f"{BASE_URL}/points",
    params={
        "path": "\\\\YOUR-SERVER\\sinusoid",
        "selectedFields": "WebId;Name;PointType;PointClass;Descriptor",
    },
)
response.raise_for_status()

point = response.json()
print(f"Name:       {point['Name']}")
print(f"WebID:      {point['WebId']}")
print(f"Type:       {point['PointType']}")
print(f"Class:      {point['PointClass']}")
print(f"Descriptor: {point.get('Descriptor', '')}")

Expected response:

point_response.json
{
  "WebId": "P1DP9We5Y6kuE07EDQR39tVQ2wQQ...",
  "Name": "sinusoid",
  "PointType": "Float32",
  "PointClass": "classic",
  "Descriptor": "12 hour sine wave"
}

AF element by path

element_by_path.py
# Look up an AF element by its hierarchy path
response = session.get(
    f"{BASE_URL}/elements",
    params={
        "path": "\\\\YOUR-AF-SERVER\\Database\\Plant1\\Reactor01",
        "selectedFields": "WebId;Name;TemplateName;HasChildren;Links",
    },
)
response.raise_for_status()

element = response.json()
print(f"Name:     {element['Name']}")
print(f"WebID:    {element['WebId']}")
print(f"Template: {element.get('TemplateName', 'None')}")

AF attribute by path

attribute_by_path.py
# Look up an AF attribute by path (element path + pipe + attribute name)
response = session.get(
    f"{BASE_URL}/attributes",
    params={
        "path": "\\\\YOUR-AF-SERVER\\Database\\Plant1\\Reactor01|Temperature",
        "selectedFields": "WebId;Name;Type;DefaultUnitsName;DataReferencePlugIn",
    },
)
response.raise_for_status()

attribute = response.json()
print(f"Name:       {attribute['Name']}")
print(f"WebID:      {attribute['WebId']}")
print(f"Value type: {attribute['Type']}")
print(f"Units:      {attribute.get('DefaultUnitsName', 'None')}")
print(f"Data ref:   {attribute.get('DataReferencePlugIn', 'None')}")

# Now you can read values using this attribute's WebID
values = session.get(
    f"{BASE_URL}/streams/{attribute['WebId']}/value",
    params={"selectedFields": "Timestamp;Value;Good"},
)
values.raise_for_status()
print(f"Current value: {values.json()['Value']}")

Path separator reference

PI paths use backslashes, and server paths start with a double backslash (e.g., \\SERVER\sinusoid). In regular Python strings each backslash must be doubled, giving "\\\\SERVER\\sinusoid"; in a raw string no doubling is needed: r"\\SERVER\sinusoid". AF paths separate the element path from the attribute name with a pipe character (|). Nested attributes use additional pipes: Element|Attribute|SubAttribute.

Look up by search

The search endpoint uses PI Indexed Search (must be enabled on the server). It supports rich query syntax with field-specific filters.

Indexed Search must be configured

The /search/query endpoint requires PI Indexed Search to be installed and configured with crawling enabled on the PI Web API server. If Indexed Search is not available, you will get a 502 Bad Gateway or empty results. Use path-based lookup as a more reliable alternative.

search_basic.py
# Basic name search
response = session.get(
    f"{BASE_URL}/search/query",
    params={
        "q": "name:sinusoid",
        "count": 10,
        "selectedFields": "Items.Name;Items.WebId;Items.ItemType;Items.Links",
    },
)
response.raise_for_status()

data = response.json()
print(f"Total hits: {data.get('TotalHits', 'unknown')}")
for item in data["Items"]:
    print(f"  {item['Name']} ({item['ItemType']}) - {item['WebId']}")

Search query syntax

| Query | What it finds |
|-------|---------------|
| name:temperature* | Points/attributes whose name starts with "temperature" |
| name:*flow* AND name:*rate* | Names containing both "flow" and "rate" |
| description:"reactor temperature" | Resources with exact phrase in description |
| afCategory:"Process Data" | AF elements/attributes in a specific category |
| attributeName:Temperature | AF elements that have an attribute named "Temperature" |
| name:sin* NOT name:sinusoidu | Names starting with "sin" but excluding "sinusoidu" |

search_advanced.py
# Scoped search: only points on a specific server
response = session.get(
    f"{BASE_URL}/search/query",
    params={
        "q": "name:*temperature*",
        "count": 50,
        "scope": "pi:YOUR-DATA-SERVER",  # Limit to specific PI Data Archive
        "selectedFields": "Items.Name;Items.WebId;Items.ItemType",
    },
)

# AF-scoped search: only within a specific AF database
response = session.get(
    f"{BASE_URL}/search/query",
    params={
        "q": "name:*reactor* AND afCategory:Equipment",
        "count": 50,
        "scope": "af:\\\\YOUR-AF-SERVER\\YourDatabase",
        "selectedFields": "Items.Name;Items.WebId;Items.ItemType",
    },
)

Look up by name filter

When you know the PI Data Archive server but not the exact point name, use the nameFilter parameter on the data server points endpoint. This does not require Indexed Search.

name_filter.py
# First, get the data server WebID
servers_resp = session.get(
    f"{BASE_URL}/dataservers",
    params={"selectedFields": "Items.WebId;Items.Name"},
)
servers_resp.raise_for_status()
servers = servers_resp.json()["Items"]

# Find your server (a default avoids StopIteration if the name is absent)
server = next((s for s in servers if s["Name"] == "YOUR-SERVER"), None)
if server is None:
    raise RuntimeError("Data server 'YOUR-SERVER' not found")
server_web_id = server["WebId"]

# List points with a name filter (supports wildcards)
response = session.get(
    f"{BASE_URL}/dataservers/{server_web_id}/points",
    params={
        "nameFilter": "reactor_*_temperature",  # Wildcard pattern
        "maxCount": 200,
        "selectedFields": "Items.WebId;Items.Name;Items.PointType;Items.Descriptor",
    },
)
response.raise_for_status()

points = response.json()["Items"]
print(f"Found {len(points)} matching points:")
for point in points:
    print(f"  {point['Name']} ({point['PointType']}) - {point['WebId']}")
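The points endpoint supports a startIndex parameter alongside maxCount, so large result sets can be paged rather than raising maxCount indefinitely. A sketch of a paging generator (`iter_points` is an illustrative name; verify startIndex behavior against your server version):

```python
def iter_points(
    session, base_url: str, server_web_id: str, name_filter: str, page_size: int = 200
):
    """Yield all points matching name_filter, paging with startIndex/maxCount."""
    start = 0
    while True:
        resp = session.get(
            f"{base_url}/dataservers/{server_web_id}/points",
            params={
                "nameFilter": name_filter,
                "startIndex": start,
                "maxCount": page_size,
                "selectedFields": "Items.WebId;Items.Name",
            },
        )
        resp.raise_for_status()
        items = resp.json().get("Items", [])
        if not items:
            return
        yield from items
        if len(items) < page_size:
            return  # last (partial) page
        start += page_size
```

Usage: `for point in iter_points(session, BASE_URL, server_web_id, "reactor_*"): ...` walks every match without loading them all at once.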

AF element hierarchy traversal

AF organizes assets in a tree hierarchy (e.g., Site > Area > Unit > Equipment). You can walk the tree starting from any element.

af_traversal.py
def get_element_tree(
    session, base_url: str, element_web_id: str, depth: int = 0, max_depth: int = 3
) -> dict:
    """Recursively traverse the AF element hierarchy.

    Returns a nested dict with element name, WebID, attributes, and children.
    """
    # Get element details
    element_resp = session.get(
        f"{base_url}/elements/{element_web_id}",
        params={"selectedFields": "WebId;Name;TemplateName;HasChildren;Links"},
    )
    element_resp.raise_for_status()
    element = element_resp.json()

    node = {
        "name": element["Name"],
        "web_id": element["WebId"],
        "template": element.get("TemplateName", ""),
        "attributes": [],
        "children": [],
    }

    # Get attributes for this element
    attrs_resp = session.get(
        f"{base_url}/elements/{element_web_id}/attributes",
        params={
            "selectedFields": "Items.WebId;Items.Name;Items.Type;Items.DefaultUnitsName",
            "maxCount": 200,
        },
    )
    if attrs_resp.status_code == 200:
        node["attributes"] = [
            {
                "name": a["Name"],
                "web_id": a["WebId"],
                "type": a["Type"],
                "units": a.get("DefaultUnitsName", ""),
            }
            for a in attrs_resp.json().get("Items", [])
        ]

    # Recurse into children if not at max depth
    if element.get("HasChildren") and depth < max_depth:
        children_resp = session.get(
            f"{base_url}/elements/{element_web_id}/elements",
            params={
                "selectedFields": "Items.WebId;Items.Name;Items.HasChildren",
                "maxCount": 500,
            },
        )
        if children_resp.status_code == 200:
            for child in children_resp.json().get("Items", []):
                child_node = get_element_tree(
                    session, base_url, child["WebId"], depth + 1, max_depth
                )
                node["children"].append(child_node)

    return node


# Usage: traverse from a root element
root_web_id = "F1..."  # Your root element WebID
tree = get_element_tree(session, BASE_URL, root_web_id, max_depth=2)

# Print the tree
def print_tree(node, indent=0):
    prefix = "  " * indent
    template = f" [{node['template']}]" if node['template'] else ""
    print(f"{prefix}{node['name']}{template}")
    for attr in node["attributes"]:
        units = f" ({attr['units']})" if attr['units'] else ""
        print(f"{prefix}  |-- {attr['name']}: {attr['type']}{units}")
    for child in node["children"]:
        print_tree(child, indent + 1)

print_tree(tree)

Bulk WebID resolution

When you need to look up many resources at once (e.g., 200 PI points by path), use the batch endpoint instead of making 200 individual lookup calls.

bulk_webid_resolution.py
def resolve_webids_batch(
    session, base_url: str, point_paths: list[str]
) -> dict[str, str]:
    """Resolve multiple PI point paths to WebIDs in one batch call.

    Args:
        point_paths: List of full PI point paths
            (e.g., ["\\\\SERVER\\tag1", "\\\\SERVER\\tag2"])

    Returns:
        Dict mapping path -> WebID (only successful lookups)
    """
    batch = {}
    for i, path in enumerate(point_paths):
        batch[f"lookup_{i}"] = {
            "Method": "GET",
            "Resource": (
                f"{base_url}/points"
                f"?path={path}"
                f"&selectedFields=WebId;Name"
            ),
        }

    response = session.post(f"{base_url}/batch", json=batch)
    response.raise_for_status()

    path_to_webid = {}
    results = response.json()
    for i, path in enumerate(point_paths):
        result = results.get(f"lookup_{i}", {})
        if result.get("Status") == 200:
            path_to_webid[path] = result["Content"]["WebId"]
        else:
            print(f"Failed to resolve: {path} (HTTP {result.get('Status')})")

    return path_to_webid


# Usage: resolve 100 point paths in one HTTP call
paths = [
    "\\\\MY-SERVER\\reactor01_temperature",
    "\\\\MY-SERVER\\reactor01_pressure",
    "\\\\MY-SERVER\\reactor01_flow",
    # ... up to hundreds
]

webid_map = resolve_webids_batch(session, BASE_URL, paths)
print(f"Resolved {len(webid_map)} of {len(paths)} paths")

WebID caching strategies

WebID lookups add latency to every operation. Since WebIDs are stable (they only change if the resource is deleted and recreated), you should cache them.

webid_cache.py
import json
from pathlib import Path


class WebIdCache:
    """Simple file-backed WebID cache.

    WebIDs are stable identifiers -- they only change if the underlying
    object is deleted and recreated. Caching them avoids repeated lookups.
    """

    def __init__(self, cache_file: str = "webid_cache.json"):
        self.cache_file = Path(cache_file)
        self.cache: dict[str, str] = {}
        self._load()

    def _load(self):
        if self.cache_file.exists():
            self.cache = json.loads(self.cache_file.read_text())

    def _save(self):
        self.cache_file.write_text(json.dumps(self.cache, indent=2))

    def get(self, path: str) -> str | None:
        return self.cache.get(path)

    def set(self, path: str, web_id: str):
        self.cache[path] = web_id
        self._save()

    def resolve(self, session, base_url: str, path: str) -> str:
        """Get WebID from cache, or look up and cache it."""
        cached = self.get(path)
        if cached:
            return cached

        response = session.get(
            f"{base_url}/points",
            params={"path": path, "selectedFields": "WebId"},
        )
        response.raise_for_status()
        web_id = response.json()["WebId"]

        self.set(path, web_id)
        return web_id

    def resolve_many(self, session, base_url: str, paths: list[str]) -> dict[str, str]:
        """Resolve multiple paths, using cache where possible."""
        result = {}
        uncached = []

        for path in paths:
            cached = self.get(path)
            if cached:
                result[path] = cached
            else:
                uncached.append(path)

        if uncached:
            # Batch-resolve uncached paths
            batch = {}
            for i, path in enumerate(uncached):
                batch[f"lookup_{i}"] = {
                    "Method": "GET",
                    "Resource": f"{base_url}/points?path={path}&selectedFields=WebId",
                }

            resp = session.post(f"{base_url}/batch", json=batch)
            resp.raise_for_status()

            results = resp.json()
            for i, path in enumerate(uncached):
                r = results.get(f"lookup_{i}", {})
                if r.get("Status") == 200:
                    web_id = r["Content"]["WebId"]
                    self.set(path, web_id)
                    result[path] = web_id

        return result


# Usage
cache = WebIdCache("my_project_webids.json")

# First call: HTTP lookup + cache
web_id = cache.resolve(session, BASE_URL, "\\\\MY-SERVER\\sinusoid")

# Second call: instant cache hit
web_id = cache.resolve(session, BASE_URL, "\\\\MY-SERVER\\sinusoid")

When to invalidate the cache

Invalidate cached WebIDs when: (1) a lookup using a cached WebID returns 404, (2) points have been deleted and recreated, or (3) a PI Data Archive has been migrated. For most stable environments, the cache can live indefinitely. Add a try/except around reads that clears the cache entry on 404 and retries the lookup.
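That retry-on-404 pattern can be sketched as a thin wrapper (`read_value_with_retry` is illustrative; it reaches into the cache dict directly because the WebIdCache class above has no delete method):

```python
def read_value_with_retry(session, base_url: str, cache, path: str):
    """Read a current value by path, re-resolving the WebID if the cached one is stale."""
    web_id = cache.resolve(session, base_url, path)
    resp = session.get(f"{base_url}/streams/{web_id}/value")
    if resp.status_code == 404:
        # Cached WebID is stale: drop it, re-resolve, and retry once
        cache.cache.pop(path, None)
        web_id = cache.resolve(session, base_url, path)
        resp = session.get(f"{base_url}/streams/{web_id}/value")
    resp.raise_for_status()
    return resp.json()["Value"]
```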

WebID 2.0 encoding and decoding

WebID 2.0 allows you to construct a WebID from a known path without making an HTTP lookup call. This is an advanced optimization for high-throughput scenarios where you want to eliminate lookup latency entirely. The webIdType query parameter controls which encoding the server uses in responses.

webid2_decode.py

def decode_webid_type(web_id: str) -> dict:
    """Decode the type information from a WebID string.

    This extracts the resource type and encoding type from the
    first few characters without making any HTTP calls.
    """
    # Known type prefixes
    TYPE_MAP = {
        "P1": "PI Point (GUID)",
        "I1": "PI Point (ID)",
        "F1": "AF Element",
        "Ab": "AF Attribute",
        "RD": "AF Database",
        "S1": "PI Data Server",
        "RS": "AF Server",
        "Fm": "AF Event Frame",
        "ET": "AF Element Template",
    }

    prefix = web_id[:2]
    resource_type = TYPE_MAP.get(prefix, f"Unknown ({prefix})")

    return {
        "prefix": prefix,
        "resource_type": resource_type,
        "full_webid": web_id,
    }


# Useful for debugging: what kind of resource is this WebID?
info = decode_webid_type("P1DP9We5Y6kuE07EDQR39tVQ2wQQ...")
print(f"Type: {info['resource_type']}")  # PI Point (GUID)

info = decode_webid_type("F1EmAbcDeFgHiJkLmNoPqRsTuV...")
print(f"Type: {info['resource_type']}")  # AF Element

# Request WebID 2.0 encoding in API responses
response = session.get(
    f"{BASE_URL}/points",
    params={
        "path": "\\\\MY-SERVER\\sinusoid",
        "webIdType": "PathOnly",  # or "Full", "IDOnly"
        "selectedFields": "WebId;Name",
    },
)
# PathOnly WebIDs encode the server and point path, making them
# reconstructible without a lookup

Lookup method comparison

| Method | Requires | Best for | Limitations |
|--------|----------|----------|-------------|
| Path lookup | Know exact path | Reliable, works everywhere | Must know the full path |
| Search query | Indexed Search enabled | Flexible queries, partial names | Requires server configuration |
| Name filter | Know the server | Wildcard patterns, no Indexed Search needed | Only PI points, not AF resources |
| Batch resolution | Know paths (many at once) | Bulk lookups, one HTTP call | Subject to batch size limits |
| WebID 2.0 construction | Know path + server GUID | Zero-latency lookups | Complex to implement, fragile |
