
Policies & Principles

How the Nor Collection approaches naming, identity, AI-assisted research, and the obligations that come with maintaining a living archive of people and practice.

Naming

Names change.

This archive records the names under which a practitioner produced work held in this collection. All name records are stored as historical entries for research purposes. No entry is treated as primary or authoritative.

The name displayed on each practitioner page reflects the name they have designated as current, or the most recent known name. Former names appear only as footnotes: never as primary identifiers, and never in search-engine-indexed content, export records, or social metadata.

Living practitioners listed in the archive may request changes to how their name history is displayed, including removal of any entry from public view. Contact us at archive@norcollection.ca.

Former names are not included in OpenGraph tags, page metadata, OAI-PMH exports, or PostHog analytics.

Entries marked sensitive are filtered at the data layer — not the display layer.

The merge tool used by administrators writes a record of the merged name with is_public=false, preserving the historical connection without surfacing it publicly.
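The data-layer approach described above can be sketched in miniature. This is an illustrative example only, assuming a simple SQLite table; the table name, columns, and helper functions are hypothetical, not the archive's actual schema:

```python
import sqlite3

# Hypothetical schema: every name record carries an is_public flag.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE name_records (
        practitioner_id INTEGER,
        name TEXT,
        is_public INTEGER  -- 0 = sensitive or merged; filtered before display
    )
""")
conn.executemany(
    "INSERT INTO name_records VALUES (?, ?, ?)",
    [(1, "Current Name", 1), (1, "Former Name", 0)],
)

def public_names(practitioner_id):
    """Data-layer filter: non-public entries never reach the display layer."""
    rows = conn.execute(
        "SELECT name FROM name_records"
        " WHERE practitioner_id = ? AND is_public = 1",
        (practitioner_id,),
    )
    return [name for (name,) in rows]

def merge_name(practitioner_id, merged_name):
    """Merge tool: record the historical connection without surfacing it."""
    conn.execute(
        "INSERT INTO name_records VALUES (?, ?, 0)",  # is_public = false
        (practitioner_id, merged_name),
    )
```

Filtering in the query itself, rather than hiding rows in a template, means a new display surface (an export, an API endpoint) cannot accidentally leak a non-public entry.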

AI & Automated Research

Our models. Our servers. AI suggests. Humans decide.

The Nor Collection uses AI to support search and metadata quality. All inference runs on self-hosted servers in Toronto. Archive images, metadata, and search queries never leave our servers. The one exception is the MCP server feature: if you choose to connect your own AI assistant to the archive, you are pulling data into your tool — that connection is yours to make, and yours to control. The models are open-weight and run locally — CLIP, BGE-base, and Phi-3 Mini. The archive does not use any model that trains on user data.

Visual Similarity

Each archive entry displays visually similar works, generated by a local CLIP model running on Fly.io infrastructure in Toronto. Embeddings are computed from archive images and compared at query time. No image data leaves the archive's own servers.
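The matching step reduces to comparing embedding vectors. A minimal sketch, assuming embeddings have already been computed by the CLIP model (the vectors and entry IDs here are illustrative stand-ins, not real archive data):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query_vec, archive_vecs, top_k=3):
    """Rank archive entries by cosine similarity to a query embedding.

    archive_vecs maps entry IDs to pre-computed image embeddings.
    """
    scores = [
        (entry_id, cosine_similarity(query_vec, vec))
        for entry_id, vec in archive_vecs.items()
    ]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return [entry_id for entry_id, _ in scores[:top_k]]
```

Because embeddings are computed once at ingest and only compared at query time, no image ever needs to be re-sent anywhere for the similarity display to work.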

Metadata Suggestions

A local Phi-3 Mini model generates candidate metadata per entry: Dublin Core type, physical format, language, rights statements, Getty AAT subject terms, and CHIN Nomenclature classifications, along with a short interpretive summary. Every suggestion enters a human review queue. Nothing is published without curator approval.
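The review-queue gate can be sketched as follows. This is a simplified illustration, assuming a plain in-memory queue; the class and field names are hypothetical, not the archive's actual code:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    entry_id: int
    field_name: str   # e.g. "dc:type", a Getty AAT term, a rights statement
    value: str
    approved: bool = False

class ReviewQueue:
    """Every model-generated suggestion waits here for a curator."""

    def __init__(self):
        self.items = []

    def submit(self, suggestion):
        """AI suggests: the suggestion enters the queue unapproved."""
        self.items.append(suggestion)

    def approve(self, suggestion):
        """Humans decide: only a curator flips the approval flag."""
        suggestion.approved = True

    def publishable(self):
        # Nothing is published without curator approval.
        return [s for s in self.items if s.approved]
```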

Semantic Search

When you search the archive, your query is embedded by a local BGE-base model running in Toronto and matched against pre-computed vectors for all 14,000 entries — entirely on our own infrastructure. An optional one-sentence interpretation of your query is generated by Phi-3 Mini, also running in Toronto. No query text reaches any third-party AI service.
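The query-time matching described above amounts to one vector comparison against a pre-computed matrix. A small sketch, with toy three-dimensional vectors standing in for real BGE-base embeddings (entry IDs and values are illustrative):

```python
import numpy as np

# Pre-computed entry vectors, L2-normalised at index time.
entry_ids = ["entry-001", "entry-002", "entry-003"]
entry_matrix = np.array([
    [1.0,    0.0,    0.0],
    [0.0,    1.0,    0.0],
    [0.7071, 0.7071, 0.0],
])

def search(query_vec, top_k=2):
    """Match a query embedding against all pre-computed entry vectors."""
    q = np.asarray(query_vec, dtype=float)
    q = q / np.linalg.norm(q)
    scores = entry_matrix @ q  # cosine similarity: rows are unit vectors
    best = np.argsort(scores)[::-1][:top_k]
    return [(entry_ids[i], float(scores[i])) for i in best]
```

Because the entry vectors are normalised ahead of time, each search is a single matrix-vector product, which is why the whole operation stays on one server.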

Further policies — on collection scope, digitisation, and institutional access — will be added here as they are developed.
