Osprey User Guide
Network Visibility & Engineering Platform
This guide covers everything from first login to advanced administration. Osprey discovers network topology via IGP adjacencies (OSPFv2, OSPFv3, IS-IS) and Layer 2 neighbors (LLDP/CDP), monitors traffic via SNMP, and visualizes everything on an interactive canvas with diagnostic reports, what-if simulation, and time travel — all from a single web UI.
Table of Contents
- Getting Started
- Dashboard
- Setting Up Topology Discovery
- The Topology Canvas
- View Controls
- Device & Link Inspection
- Reports
- Tools
- Alerts & Incidents
- SSH Terminal
- Administration
- Keyboard Shortcuts
- Configuration Reference
- Troubleshooting
1. Getting Started
Installation
Debian Package (recommended for production)
sudo apt install ./osprey_<version>_amd64.deb
The installer handles everything automatically:
- Installs NATS (from distro repos or GitHub releases), PostgreSQL, and nginx as dependencies
- Creates the osprey system user and database
- Generates a random database password, JWT secret, and encryption key (stored in /etc/osprey/osprey.env)
- Creates a self-signed TLS certificate (valid for 10 years, stored in /etc/osprey/certs/)
- Runs all database migrations
- Enables the nginx site and removes the default site to avoid port conflicts
- Starts all five Osprey services via systemd (osprey-engine, osprey-api, osprey-collector-manager, osprey-snmp-poller, osprey-bmp-server) under osprey.target
After installation you will see a summary:
Osprey installed successfully.
Web UI: https://localhost/
Login: admin / admin
Config: /etc/osprey/osprey.yaml
Secrets: /etc/osprey/osprey.env
Status: systemctl status osprey.target
Tip: The .deb package works on Debian 12 (Bookworm), Debian 13 (Trixie), and Ubuntu 24.04+.
LXC Containers (Proxmox)
Osprey runs in both privileged and unprivileged LXC containers.
Privileged LXC — no special configuration needed. Install the .deb package as on bare metal.
Unprivileged LXC — requires nesting for systemd. Add to /etc/pve/lxc/<CTID>.conf:
features: nesting=1
GRE collectors create tunnel interfaces via netlink and capture packets with raw sockets. On kernel 6.x, CAP_NET_ADMIN and CAP_NET_RAW within the container's user namespace are sufficient — both are kept by default in Proxmox unprivileged containers. If GRE tunnel creation fails with a permission error, AppArmor may be blocking netlink operations. Resolve by adding:
lxc.apparmor.profile: unconfined
SNMP-only deployments (no GRE tunnels) work in unprivileged containers without any extra configuration beyond nesting=1.
First Login
Open your browser to https://your-server/ (port 443). Accept the self-signed certificate warning. Log in with the default credentials:
| Field | Value |
|---|---|
| Username | admin |
| Password | admin |
Important: On first login, a mandatory password change dialog appears -- you must change the default password before accessing any other feature. Enter the current password (admin), then choose a new password that meets the security policy (minimum 8 characters, uppercase, number, and special character by default). After changing the password, you are logged in normally. A yellow warning banner also appears at the top of the screen whenever the default admin username is in use, with a Change Password inline form. The banner can be dismissed for the current session but reappears on next login. You can also change passwords via Admin > Users & Security > Users.
Understanding the Hierarchy
Osprey organizes network data in a hierarchy:
Network → Autonomous System → Routing Domain → Protocol Instance → Area → Devices/Links
- Network: Top-level organizational boundary (e.g., "Production", "Lab").
- Autonomous System: BGP AS number (auto-created, hidden in the UI).
- Routing Domain: Global routing table, VRF, or L3VPN (a "default" domain is auto-created with each network).
- Protocol Instance: An IGP process. For OSPF: router ospf 1 (shown as "OSPF 1" or "OSPFv3 1 (v6)"). For IS-IS: router isis CORE (shown as "IS-IS CORE"). Supports OSPFv2, OSPFv3 (with IPv6 or IPv4 address family), and IS-IS (ISO 10589).
- Area: An OSPF area (e.g., 0.0.0.0 for the backbone) or an IS-IS level (Level 1 or Level 2).
In practice, the Autonomous System and default Routing Domain are created automatically when you add a network. The sidebar hides the AS level entirely, so the typical workflow is:
- Create a Network (e.g., "Production") -- this auto-creates a default AS (65000) and a default Routing Domain ("default", type global) behind the scenes.
- Add a Protocol Instance under the network (e.g., OSPF process 1 or IS-IS instance "CORE").
- Add an Area (OSPF area or IS-IS level) with a collector to start discovering topology (see Section 3).
You only need to create additional Routing Domains manually if you have VRFs or L3VPNs. Use the + icon on a network and select "Add Domain" for this.
Tip: Admin and engineer users see small action icons (add, edit, delete) when hovering over hierarchy items in the sidebar. Operator-role users can browse the hierarchy and view topology but cannot modify it.
2. Dashboard
When you first log in (or whenever no hierarchy item is selected in the sidebar), you land on the Dashboard -- a six-card overview of your entire network.
Network Health
Shows system status and counts of key resources.
- Status indicator: A colored dot and label showing one of:
- Healthy (green) -- all infrastructure (DB, NATS) is reachable, the engine is running, all enabled collectors are running, and no areas are stale.
- Degraded (yellow) -- infrastructure is up but the engine is unresponsive, some collectors have issues, some SNMP targets are failing, or one or more areas are stale (not receiving updates).
- Unhealthy (red) -- the API, database, or NATS is unreachable.
- Checking (gray, pulsing) -- initial health check in progress.
- Counters: Networks, Areas, Devices, Links, Collectors (running/total), and SNMP Targets (active/total).
Click the status indicator in the bottom bar to open the System Health popover, which shows the status of all 7 services:
| Service | What it checks |
|---|---|
| API Server | Whether the health endpoint itself is reachable |
| Engine | NATS heartbeat (published every 15s) — shows freshness like "Healthy · 12s ago" |
| Collector Manager | NATS heartbeat — shows freshness |
| SNMP Poller | NATS heartbeat — combined with SNMP target count |
| Database | PostgreSQL ping with latency |
| Message Bus | NATS connection status |
| Live Updates | WebSocket connection to the API |
Heartbeat-based services (Engine, Collector Manager, SNMP Poller) show three states:
- Healthy (green) -- last heartbeat received within 45 seconds
- No heartbeat (yellow, pulsing) -- last heartbeat 45–120 seconds ago
- Down / Not responding (red, pulsing) -- no heartbeat received for over 120 seconds, or never seen
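The heartbeat thresholds above can be sketched as a small classifier. This is illustrative only; the function name is ours, not Osprey's.

```shell
# Classify a heartbeat-based service (Engine, Collector Manager, SNMP
# Poller) from the seconds elapsed since its last NATS heartbeat.
# Thresholds (45s / 120s) are taken from the guide; -1 means never seen.
heartbeat_state() {
  age="$1"
  if [ "$age" -ge 0 ] && [ "$age" -lt 45 ]; then
    echo "healthy"
  elif [ "$age" -ge 45 ] && [ "$age" -le 120 ]; then
    echo "no heartbeat"
  else
    echo "down"
  fi
}

heartbeat_state 12    # -> healthy   (e.g., "Healthy · 12s ago")
heartbeat_state 60    # -> no heartbeat
heartbeat_state 300   # -> down
heartbeat_state -1    # -> down (never seen)
```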
Active Alerts
Displays severity badges (critical, warning, info) and lists the top 5 firing alerts with colored severity dots. If there are no active alerts, the card shows "No active alerts."
Recent Events
A 24-bar sparkline histogram shows event frequency over the last 24 hours (one bar per hour). Below it, the 8 most recent topology events are listed with color-coded type badges (green for additions, red for removals, amber for changes) and relative timestamps (e.g., "5m ago", "2h ago").
Active Incidents
Shows the total correlated incident count with severity badges and summaries of the top 3 active incidents. Each incident displays its event count and time since last event. Incidents group related events (e.g., multiple link failures caused by a single device going down).
Network at a Glance
A clickable list of all networks with per-network statistics (areas, devices, links). Clicking a network loads its full topology on the canvas and opens the sidebar for navigation.
Tip: This is the fastest way to jump into a specific network's topology from the dashboard.
Top Utilized Links
Lists the 5 most utilized links across all networks with color-coded utilization bars:
- Green: below 80%
- Yellow: 80% to 94%
- Red: 95% and above
Parallel links between the same device pair are deduplicated (only the highest-utilized link is shown). This card requires SNMP traffic monitoring to be configured (see SNMP Traffic Monitoring). If no utilization data is available, the card shows "No utilization data."
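The color bands can be expressed as a tiny helper (a sketch; the function name is hypothetical):

```shell
# Map a utilization percentage to the dashboard's color bands:
# below 80% green, 80-94% yellow, 95% and above red.
util_color() {
  if [ "$1" -lt 80 ]; then
    echo green
  elif [ "$1" -lt 95 ]; then
    echo yellow
  else
    echo red
  fi
}

util_color 42   # -> green
util_color 87   # -> yellow
util_color 96   # -> red
```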
The dashboard auto-refreshes every 30 seconds.
3. Setting Up Topology Discovery
Osprey discovers IGP topology through two methods: GRE tunnels (direct protocol adjacency) and SNMP polling (agentless discovery). Both OSPF (v2/v3) and IS-IS are supported with either method. Layer 2 adjacencies are discovered separately via LLDP/CDP — see L2 Neighbor Discovery below.
Creating the Hierarchy
Before adding collectors, create the network hierarchy in the sidebar.
Step 1 -- Create a Network:
- Open the sidebar by clicking the toggle tab on the left edge of the screen (or via View > Sidebar).
- Click the + icon next to the "Networks" heading at the top.
- Enter a name (e.g., "Production") and an optional description.
- Click Create.
This automatically creates a hidden Autonomous System (ASN 65000) and a default Routing Domain ("default", type global) behind the scenes. You do not need to create these manually.
Step 2 -- Add a Protocol Instance:
- Hover over the network name in the sidebar and click the + icon that appears.
- Select Add Protocol.
- Choose the protocol (OSPFv2, OSPFv3, IS-IS, or BGP) and enter an identifier:
  - OSPF: Enter a process ID (e.g., 1 for router ospf 1). When OSPFv3 is selected, an address family selector appears (IPv6 default, IPv4 for RFC 5838 AF extensions).
  - IS-IS: Enter an instance tag (e.g., CORE for router isis CORE). Shown in the sidebar as "IS-IS CORE".
  - BGP: Enter the local ASN (e.g., 65000). Only one BGP protocol instance is allowed per routing domain. BGP instances do not have areas -- instead, you add BMP targets under them (see Setting Up BGP Monitoring).
- Optionally add a description.
- Click Create.
Step 3 -- Add an Area with a Collector:
Areas (OSPF areas or IS-IS levels) are created together with their first collector through the Tunnel Quick-Add dialog. Hover over the Protocol Instance, click +, and select Add Area. This opens a dialog where you can specify the area ID (or IS-IS level), area type, and collector configuration all at once. See Method 1 or Method 2 below.
Alternatively, if you need additional routing domains (for VRFs or L3VPNs), hover over the network, click +, and select Add Domain.
Tip: You can also add collectors to existing areas. Hover over an area in the sidebar, click the + icon, and choose Add Recorder (for GRE) or Add Recorder (SNMP). An area can have both a GRE collector and an SNMP collector simultaneously.
Method 1: GRE Tunnel Discovery
GRE tunnels form a real IGP adjacency with your network. Osprey receives the full link-state database and tracks changes in real time. OSPFv2, OSPFv3, and IS-IS are all supported -- the Tunnel Quick-Add dialog adapts based on the parent protocol instance.
- Open the Tunnel Quick-Add dialog by either:
- Hovering over a Protocol Instance in the sidebar, clicking +, and selecting Add Area (creates both the area and the collector).
- Hovering over an existing Area in the sidebar, clicking +, and selecting Add Recorder (adds a GRE collector to an existing area).
- Select GRE as the discovery mode (this is the default).
- Fill in the required fields:
  - GRE Remote: The router's WAN IP (GRE tunnel destination endpoint).
  - GRE Local: The IP on your Osprey server that faces the remote router.
  - Tunnel IP: The /30 or /31 point-to-point address for the tunnel interface (e.g., 10.254.0.1/30 for OSPFv2, or fe80::1/64 for OSPFv3).
  - Tunnel Peer: The remote end's tunnel IP (e.g., 10.254.0.2).
- Optionally expand the Advanced section to configure:
  - Router ID: Osprey's OSPF router ID (auto-generated if left blank). Router IDs are always 32-bit dotted-decimal, even for OSPFv3. For IS-IS, this field is replaced by NET (Network Entity Title, e.g., 49.0001.0192.0168.0001.00).
  - Hello/Dead intervals: Timer values (OSPF defaults: 10s hello, 40s dead; IS-IS defaults: 10s hello, 30s hold).
  - Cost, Priority, MTU, TTL: Fine-tuning parameters. IS-IS uses wide metrics by default (range 1--16777215).
  - Authentication: None, simple password, or MD5/SHA-HMAC with key ID. (OSPFv3 uses IPsec for authentication external to the protocol, so in-protocol authentication is hidden for v3 collectors. IS-IS supports HMAC-MD5.)
- If creating a new area:
  - OSPF: Enter the Area ID (e.g., 0.0.0.0 for the backbone; integer notation like 0 also works and auto-converts to dotted-decimal) and Area Type (normal, stub, or NSSA).
  - IS-IS: Select the Level (Level 1, Level 2, or Level 1/2) from the dropdown.
- Click Create. The collector-manager detects the new config within 10 seconds, creates the GRE tunnel, and starts adjacency formation.
Requirements: The Osprey server needs IP connectivity to the remote router. The collector-manager runs as root (GRE tunnels require NET_ADMIN and NET_RAW capabilities).
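For reference, the tunnel the collector-manager creates is roughly equivalent to the following manual commands (illustrative only -- the interface name is a placeholder, and Osprey's actual naming and options may differ):

```shell
ip tunnel add osprey-gre0 mode gre local <osprey-local-ip> remote <router-wan-ip> ttl 64
ip addr add 10.254.0.1/30 dev osprey-gre0
ip link set osprey-gre0 up
```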
On the router side, configure a GRE tunnel back to Osprey and add it to the IGP. Example (Cisco IOS, OSPF):
interface Tunnel100
ip address 10.254.0.2 255.255.255.252
tunnel source <router-wan-ip>
tunnel destination <osprey-server-ip>
ip ospf 1 area 0
ip ospf cost 1000
ip ospf priority 0
Example (Cisco IOS, IS-IS):
interface Tunnel100
ip address 10.254.0.2 255.255.255.252
tunnel mode gre ip
tunnel source <router-wan-ip>
tunnel destination <osprey-server-ip>
clns router isis CORE
ip router isis CORE
isis circuit-type level-2-only
isis metric 16777215
Warning: Ensure the hello and dead/hold intervals match on both sides. A mismatch prevents adjacency formation. Osprey defaults to 10s hello / 40s dead for OSPF, and 10s hello / 30s hold for IS-IS. Set the IS-IS metric to the maximum (16777215) to prevent the tunnel from being used for transit traffic.
Method 2: SNMP Discovery
SNMP discovery polls routers via SNMPv2c or v3 to walk the link-state database MIB. No tunnel configuration needed on the router -- just ensure SNMP is enabled and reachable from the Osprey server. Supported MIBs: OSPFv2 (RFC 1850), OSPFv3 (RFC 5643), and ISIS-MIB (RFC 4444). For IS-IS, Osprey auto-detects Cisco proprietary ISIS-MIB OIDs and falls back to them when the standard MIB is unavailable.
- Open the Tunnel Quick-Add dialog by either:
- Hovering over a Protocol Instance in the sidebar, clicking +, and selecting Add Area.
- Hovering over an existing Area, clicking +, and selecting Add Recorder (SNMP) (pre-selects SNMP mode).
- Select SNMP as the discovery mode.
- Enter the Target IP (the router's management IP address).
- Select SNMP version and enter credentials:
  - v2c: Community string (e.g., public)
  - v3: Username, auth protocol (MD5/SHA), auth password, privacy protocol (DES/AES), privacy password
- Optionally configure:
- Poll Interval: How often to poll (in seconds).
- Crawl ABRs/L1L2: Enable BFS crawl to automatically discover IGP neighbors and build the full topology from a single seed device. For IS-IS, this crawls L1/L2 routers across levels.
- If creating a new area, enter the area ID and type.
- Click Create.
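Both the GRE and SNMP dialogs accept integer area IDs and auto-convert them to dotted-decimal. Assuming the standard big-endian split of a 32-bit value, the conversion looks like this (the function name is ours):

```shell
# Convert an integer OSPF area ID to dotted-decimal notation,
# e.g. 0 -> 0.0.0.0, treating the integer as four big-endian octets.
area_to_dotted() {
  a="$1"
  echo "$(( (a >> 24) & 255 )).$(( (a >> 16) & 255 )).$(( (a >> 8) & 255 )).$(( a & 255 ))"
}

area_to_dotted 0     # -> 0.0.0.0
area_to_dotted 16    # -> 0.0.0.16
```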
Tip: Credential Profiles let you save named SNMP credential templates for reuse across multiple collectors and SNMP targets. Create them under Admin > Monitoring > Credential Profiles.
Collector Status
Each collector shows a colored status dot next to its area in the sidebar:
- Green (running): Actively discovering topology and receiving data.
- Blue pulsing (discovering): SNMP collector is actively crawling/discovering neighbors.
- Yellow pulsing (starting): Initializing GRE tunnel or SNMP session.
- Gray pulsing (pending): Configuration saved but not yet picked up by the collector-manager.
- Gray solid (stopped): Disabled by user.
- Red (error): Failed -- hover over the status dot to see the collector name and status. Click the area to expand and view the full error message.
Hover over an area with a collector to reveal action icons:
- Edit (pencil icon): Modify the collector configuration. Changes increment the config version, causing the collector-manager to automatically restart the collector with the new settings.
- Toggle (pause/play icon): Enable or disable the collector. The collector-manager handles start/stop within 10 seconds.
Tip: When multiple collectors exist on the same area (e.g., both GRE and SNMP), each shows its own status dot. Click the area to expand and see details for each collector individually.
SNMP Traffic Monitoring
Separately from topology discovery, Osprey polls device interfaces for traffic statistics (utilization, errors, discards). This is handled by the SNMP Poller service, which is independent of the topology collectors.
Set up SNMP targets under Admin > Monitoring > SNMP Targets:
- Click Add Target.
- Enter the device management IP and SNMP credentials (or select a credential profile).
- The SNMP poller begins collecting interface counters at the configured interval (default 5 minutes, configurable in Admin > System Settings > SNMP).
Targets that fail 10 consecutive polls are automatically disabled to prevent wasted resources. You can re-enable them manually after fixing the underlying issue.
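The consecutive-failure rule can be sketched as a counter that resets on any success (names and structure are ours, purely illustrative):

```shell
# A target is disabled after 10 polls in a row fail; a single
# successful poll resets the counter.
fails=0
state=enabled
record_poll() {
  if [ "$1" = ok ]; then
    fails=0
  else
    fails=$((fails + 1))
    if [ "$fails" -ge 10 ]; then state=disabled; fi
  fi
}

# Nine failures followed by one success: the target stays enabled.
for i in 1 2 3 4 5 6 7 8 9; do record_poll fail; done
record_poll ok
echo "$state"   # -> enabled
```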
Pausing the SNMP Poller: The SNMP Targets manager has a Stop/Start SNMP Poller button that pauses all regular SNMP polling (discovery + counters) across all targets. When paused, zero SNMP traffic is generated -- useful for maintenance windows or troubleshooting. On-demand boost mode (clicking a link for live traffic) still works while the poller is paused. Click the button again to resume normal polling.
Traffic data enables:
- Utilization coloring on the topology canvas (View > Color > By Utilization)
- Link Detail Panel traffic charts (with automatic 5-second boost polling while the panel is open)
- Congestion alerts (80% warning, 95% critical -- seeded by default)
- Top Utilized Links card on the dashboard
- Congestion Trend diagnostic report (Reports > Diagnostics > Congestion Trend)
- MTU mismatch detection (Reports > Diagnostics > MTU Mismatch) — detects interface MTU mismatches across link endpoints via SNMP IF-MIB
L2 Neighbor Discovery (LLDP/CDP)
Osprey discovers Layer 2 adjacencies via SNMP walks of the LLDP-MIB (IEEE 802.1AB) and CDP-MIB tables. This runs automatically alongside IGP topology discovery (OSPF or IS-IS) when enabled.
How it works:
- The SNMP poller walks LLDP-MIB on each target device. If LLDP data is unavailable (e.g., older IOS devices), it falls back to CDP-MIB per device.
- Discovered L2 neighbors are stored with chassis ID, port ID, system name, management addresses, and platform description.
- LLDP and CDP capability bitmaps are normalized to a unified scheme (bridge=0x04, router=0x10). Only neighbors with bridge and/or router capability are stored; endpoints without these capabilities are excluded.
- Platform-based filtering automatically excludes wireless access points (Cisco AIR-/Aironet/C91xx, Aruba, Meraki, Ubiquiti, Ruckus) and Cisco UCS Fabric Interconnects (identified by "U: Uplink" / "S: Server" port descriptions) even when they advertise bridge capability.
- UTF-8 sanitization strips NUL bytes and invalid sequences from SNMP string values.
- Stale neighbors (not seen within the configured expiry window) are automatically cleaned up.
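The capability filter described above amounts to a bitmask test against the normalized scheme (bridge=0x04, router=0x10); a sketch, with the function name being ours:

```shell
# Keep an L2 neighbor only if its normalized capability bitmap has the
# bridge (0x04) and/or router (0x10) bit set; 0x14 is their union.
keep_neighbor() {
  caps="$1"
  if [ $(( caps & 0x14 )) -ne 0 ]; then
    echo keep
  else
    echo drop
  fi
}

keep_neighbor 0x04   # bridge-only switch -> keep
keep_neighbor 0x10   # router -> keep
keep_neighbor 0x00   # endpoint with neither capability -> drop
```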
Configuration:
L2 discovery settings are under Admin > System Settings > SNMP > L2 Discovery:
| Setting | Default | Description |
|---|---|---|
| L2 Enrichment | off | Master toggle for LLDP/CDP neighbor discovery on monitored routers |
| Switch Crawling | off | BFS crawl to discover neighboring switches via LLDP/CDP. Phones, APs, and endpoints are excluded. |
| Crawl Interval (minutes) | 360 | How often the L2 crawler runs (6 hours) |
| Max Crawl Depth | 3 | Maximum BFS hop depth from nearest OSPF router seed |
| Neighbor Expiry (hours) | 72 | Hours before unseen neighbors are pruned (3 days) |
These global settings apply to all networks by default. Individual networks can override them -- see Per-Network L2 Configuration below.
Device identity resolution:
Osprey automatically resolves L2 neighbors to existing devices in the topology using a 5-tier cascade:
- Chassis ID -- matches the remote chassis ID against device.chassis_id (collected via lldpLocChassisId)
- SNMP target management IP -- matches the remote management IP against configured SNMP targets
- Router ID -- matches the remote management IP against device router IDs
- Interface IP -- matches the remote management IP against known interface IPs
- Hostname -- case-insensitive match of the remote system name against device hostnames
Resolution runs at poll time (per-device) and via a background sweep every 5 minutes.
L2 canvas overlay:
When L2 data is available, click the L2 toggle button in the toolbar to display Layer 2 switches and links on the canvas alongside the IGP topology. See L2 Topology Overlay below.
Note: L2 neighbor discovery requires SNMP targets to be configured. It runs during the regular SNMP discovery interval (default 6 hours).
L2 Topology Overlay
The L2 toggle button in the toolbar renders discovered switches and L2 adjacencies on the Cytoscape canvas as an overlay alongside the L3 IGP topology (OSPF or IS-IS).
What it shows:
- L2 switches appear as dashed-border nodes, positioned near their connected L3 routers.
- L2 links appear as dashed gray edges between switches and between switches and routers.
- Router-to-router L2 edges are suppressed -- only edges to L2-only devices are shown (avoids duplicating the existing OSPF links).
Interaction:
- Click an L2 edge to open the L2 Link Detail Panel, showing LLDP/CDP adjacency details, capabilities, and protocol.
- Right-click context menu guards prevent actions that do not apply to L2-only devices (RIB, SSH, SPF tree).
- L2 topology auto-refreshes via WebSocket when changes are detected.
Per-Network L2 Configuration
Each network can override the global L2 discovery settings. This is useful when different networks require different SNMP credentials, or when you want L2 discovery enabled for some networks but not others.
Configuring per-network overrides:
- In the sidebar, click on a network name to expand it, then click the pencil icon to edit.
- Below the name and description fields, the L2 Configuration section appears (admin only).
- Each toggle uses a tri-state cycle: click to cycle through ON, OFF, and Default:
- ON (green) -- explicitly enabled for this network, regardless of the global setting.
- OFF (red) -- explicitly disabled for this network, regardless of the global setting.
- Default (gray) -- inherits from the global system setting. Shows "Default (on)" or "Default (off)" to indicate the effective value.
Available per-network overrides:
| Setting | Description |
|---|---|
| LLDP/CDP Enrichment | Override the global L2 enrichment toggle for this network |
| Switch Crawling | Override the global switch crawl toggle for this network |
| Primary Credential | SNMP credential profile for L2 crawling in this network (overrides the global default) |
| Fallback Credential | Fallback credential profile for this network (overrides the global fallback) |
The credential profile selectors appear only when credential profiles have been created (see Admin > SNMP Credential Profiles). When set to "Global default", the network uses whatever profile is configured in the global SNMP settings.
Precedence: Per-network settings always take priority over global system settings. When a per-network toggle is set to Default (null), the SNMP poller falls back to the global l2.enrichment_enabled and l2.crawl_enabled system settings.
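The precedence rule amounts to a null-coalescing lookup; a sketch (function and value names are ours, not Osprey's):

```shell
# Resolve the effective value of a tri-state per-network toggle:
# an explicit "on"/"off" override wins; "default" inherits the global.
effective_setting() {
  per_network="$1"   # "on", "off", or "default"
  global="$2"        # "on" or "off"
  case "$per_network" in
    on|off) echo "$per_network" ;;   # explicit override wins
    *)      echo "$global" ;;        # Default (null) falls back to global
  esac
}

effective_setting off on      # -> off (explicit OFF beats global on)
effective_setting default on  # -> on  (inherits global)
```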
L2 data management:
Right-click a network in the sidebar for data management options:
- Reset L2 Data -- deletes all L2 neighbor and discovered neighbor data for the network. Useful after credential changes or when L2 data has become stale.
- Reset Crawler Queue -- clears pending crawl targets for the network without affecting existing L2 neighbor data.
Multi-Protocol Devices
The same physical device can run both OSPF and IS-IS simultaneously. Osprey automatically correlates devices across protocols using TE Router ID (IS-IS TLV 134) and SNMP sysName matching. Each protocol is managed under its own protocol instance in the sidebar.
The canvas currently shows one protocol instance at a time -- select the desired OSPF process or IS-IS instance in the sidebar. Multi-protocol overlay (viewing OSPF and IS-IS topology simultaneously) is planned for a future release.
IS-IS Event Types
IS-IS topology changes generate events similar to OSPF but with IS-IS-specific types:
- LSP Update: An IS-IS Link State PDU was updated (analogous to OSPF LSA Update).
- LSP Purge: An LSP was purged from the LSDB (analogous to LSA MaxAge).
- IS-IS Adjacency Change: An IS-IS adjacency transitioned state (up/down).
- DIS Change: The Designated Intermediate System changed on a broadcast segment (analogous to OSPF DR Change).
These events appear in the Activity Tray, event history, and incident correlation alongside OSPF events.
Setting Up BGP Monitoring (BMP)
Osprey receives BGP routing data via BMP (BGP Monitoring Protocol, RFC 7854). Your routers push BMP messages to Osprey's BMP server -- no polling required. This gives you visibility into BGP peers, the RIB (best-path table), and AS path analysis.
Prerequisites:
- The network hierarchy (Network, AS, Routing Domain) must already exist -- see above.
- Your router must support BMP and be configured to send BMP to Osprey's IP on port 11019 (TCP).
Step 1 -- Configure BMP on the router:
On Cisco IOS-XR:
bmp server 1
host 198.51.100.10 port 11019
flapping-delay 60
!
router bgp 65000
bmp server 1
route-monitoring policy post inbound
On Arista EOS:
router bgp 65000
neighbor 10.1.0.1
bmp activate
!
management api bmp
host 198.51.100.10 port 11019
Step 2 -- Create a BMP target in Osprey:
First, create a BGP protocol instance in the sidebar:
- Hover over a network and click + > Add Protocol.
- Select BGP as the protocol.
- Click Create.
Then add a BMP target:
- Expand the BGP protocol instance in the sidebar.
- Click the + icon and select Add Target.
- Enter a name, the router's management IP, and select a RIB mode:
  - loc_rib -- Best paths as computed by the router itself (most common)
  - adj_rib_in_post -- All paths received from all peers (post-policy)
  - none -- Accept BMP session but skip RIB processing
- Click Create.
BMP targets can also be managed via the REST API:
- Create: POST /api/v1/bgp/targets
- List targets: GET /api/v1/bgp/targets?pi_id=...
- Update: PUT /api/v1/bgp/targets/{id}
- Toggle enable/disable: PUT /api/v1/bgp/targets/{id}/toggle
- Delete: DELETE /api/v1/bgp/targets/{id}
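For example, creating a target from the command line might look like this. The host, token variable, and JSON field names are illustrative assumptions, not confirmed by this guide -- consult your API reference for the actual request schema:

```shell
curl -k -X POST "https://osprey.example.net/api/v1/bgp/targets" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"pi_id": "...", "name": "rr1", "address": "203.0.113.1", "rib_mode": "loc_rib"}'
```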
Step 3 -- Verify connectivity:
The target's status indicator in the sidebar changes:
- Pending (gray) -- Waiting for the router to connect
- Connected (green) -- BMP session established, receiving data
- Error (red) -- Connection failed (check router config, firewall, port 11019)
Once connected, BGP peers appear within seconds. Routes populate as the initial RIB dump completes (End-of-RIB marker). Right-click a BMP target in the sidebar to view its peers or routes directly.
Step 4 -- View BGP data:
- BGP Peers: Reports > Routing > BGP Peers (or right-click a BMP target > View Peers)
- BGP Routes: Reports > Routing > BGP Routes (or right-click a BMP target > View Routes)
- Live updates: Peer state changes (up/down) and route count changes are pushed to the browser via WebSocket -- no page refresh needed.
Tip: For route reflector deployments, configure BMP on the RR. The RR's Loc-RIB reflects all client routes, giving full AS-wide visibility from a single BMP session.
4. The Topology Canvas
The canvas is the heart of Osprey. It shows routers as nodes and IGP adjacencies (OSPF or IS-IS) as links, arranged in an interactive graph. Changes are pushed via WebSocket in real time -- when a device or link changes state, affected elements briefly flash with a highlight animation and a toast notification appears summarizing the update (e.g., "Topology updated: 2 devices, 3 links changed").
Navigation
| Action | How |
|---|---|
| Pan | Click and drag on empty canvas |
| Zoom | Mouse wheel (scroll up = zoom in) |
| Zoom in/out | Click the +/- icons in the menu bar (top-right) |
| Fit to screen | Click the fit-to-screen icon in the menu bar (top-right) |
| Select device | Left-click a node (opens Node Detail Panel) |
| Select link | Left-click an edge (opens Link Detail Panel) |
| Context menu | Right-click a node or link |
| Search | Ctrl+K / Cmd+K (or Tools > Search) |
| Deselect | Left-click empty canvas (deselects; panels remain open) |
| Export | Topology > Export as PNG or Topology > Export as SVG. Exports use the current theme background color. If the grid is enabled (View > Grid), it is included in the export. |
Selecting an Area
Use the sidebar (left panel) to navigate the hierarchy. Click an area (OSPF area or IS-IS level) to load its topology on the canvas. You can check/uncheck multiple areas within the same protocol instance to view them together. IS-IS levels appear in the sidebar as "Level 1" and "Level 2" instead of dotted-decimal area IDs.
The sidebar is resizable -- drag its right edge. Toggle it via View > Sidebar.
Layout
Osprey uses force-directed layout by default. The bottom bar contains the layout toolbar on the left side and status indicators on the right side.
Layout algorithm dropdown (four options):
- Force-Directed (default): fCoSE physics-based layout. Best general-purpose choice.
- Geometric: Manhattan-style grid placement using BFS ordering from a reference fCoSE layout. Produces clean right-angle layouts.
- Octilinear: Like Geometric but allows 45-degree diagonal edges. Minimizes edge crossings with iterative refinement across multiple root candidates.
- Circle: Arranges all nodes in a circle.
Layout actions in the bottom bar:
- Re-layout button: Re-run the current algorithm on the entire canvas.
- Layout picker dropdown: Switch between named saved layouts. Active layout is highlighted. Layouts marked "(default)" are the area default.
- Save button: Overwrite the active layout with current positions. An orange dot appears when positions have been modified ("dirty" state).
- Save As (+) button: Save current positions as a new named layout. Type a name in the inline text field and press Enter.
- Revert button: Restore the active layout's saved positions. Only enabled when the layout is dirty and an active layout exists.
- Delete layout: Admins can delete saved layouts via the X button in the picker dropdown.
Snap-to-grid: Enable the "Snap" checkbox in the bottom bar to lock dragged nodes to a grid. Choose grid spacing (10px, 20px, or 40px) from the adjacent dropdown.
Drag nodes to adjust their position. Layout positions are scoped to the protocol instance level, so positions saved in one area view are shared when switching to another area under the same OSPF process or IS-IS instance.
MiniMap
A minimap overlay appears in the bottom-right corner of the canvas when enabled. It shows a bird's-eye view of the entire graph with:
- Colored dots for nodes (blue for normal, orange for ABRs, red for down devices)
- Gray lines for edges
- A blue rectangle indicating the currently visible viewport area
Click or drag on the minimap to pan the main canvas to that location.
Node Appearance
Nodes display:
- Icon: Based on device role (router, ABR/L1L2, ASBR, ABR+ASBR, collector) in the selected icon pack. Both built-in packs provide dedicated icons for each role. IS-IS L1/L2 routers use the same icon as OSPF ABRs. In "By Area" color mode, icons are recolored to match the area's assigned color.
- Label: Depends on the View > Node Labels setting (hostname, DNS, router ID, etc.).
- Border color: Depends on the View > Color setting (area color, metric cost, utilization).
- Size: Adjustable via View > Size (up, down, reset). Edge label font sizes scale proportionally with node size.
- Status overlays: Stale devices appear visually degraded. Changed elements briefly flash with a highlight animation for 3 seconds after a topology update.
Link Appearance
Links display:
- Line style: Solid for up, dashed for down.
- Color: Depends on color mode (area, cost gradient, utilization heatmap).
- Labels: Toggled independently -- cost, interface names, IP addresses (View > Link Labels). Asymmetric costs are flagged with a visual indicator. IS-IS wide metrics (up to 16M) are abbreviated on cost labels: 10K, 1.2M, 16.7M.
- Multi-protocol merging: When multiple protocols (OSPFv2, OSPFv3, IS-IS) share the same physical interface between two devices, their links merge into a single edge on the canvas. Merged edges are slightly thicker (2.5px for 2 protocols, 3px for 3+) and show a `[2P]` or `[3P]` suffix after the cost label. Hover a merged edge to see a compact tooltip with each protocol's area, cost, and state. Merging requires SNMP enrichment to identify the shared physical interface (ifIndex). Without SNMP, each protocol's links appear as separate edges.
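The wide-metric abbreviation on cost labels (10K, 1.2M, 16.7M) can be sketched as follows. This is an illustrative Python sketch, not Osprey's actual code; it assumes the label truncates (rather than rounds) to one decimal place, which matches 16.7M for the IS-IS wide-metric maximum of 16,777,215.

```python
def abbrev_metric(value):
    """Abbreviate an IGP metric for a cost label (illustrative sketch).

    Truncates to one decimal place, e.g. 16777215 -> "16.7M".
    """
    for unit, div in (("M", 1_000_000), ("K", 1_000)):
        if value >= div:
            tenths = value * 10 // div   # truncate to one decimal place
            if tenths % 10 == 0:
                return f"{tenths // 10}{unit}"
            return f"{tenths // 10}.{tenths % 10}{unit}"
    return str(value)

# abbrev_metric(10_000) -> "10K"; abbrev_metric(1_200_000) -> "1.2M"
```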
Hiding Nodes
You can hide devices from the canvas:
- Right-click > Hide: Hides a single device.
- View > Filters > Hide Leaf Nodes: Hides devices with one or fewer links.
- View > Filters > Hide Failed Nodes: Hides isolated/unreachable devices.
- View > Filters > Hide Unconnected: Hides devices with zero connected edges.
- View > Filters > Vendor: Filter to show only a specific vendor. A submenu lists all discovered vendors; select one, or choose "All Vendors" to reset.
- View > Filters > Role: Filter to show only ABRs or ASBRs. Select "All Roles" to reset.
- View > Filters > Unhide All: Resets all filters at once (also available in the bottom bar).
Hidden nodes are indicated by an "Unhide (N)" badge in the bottom bar. Click it to restore all hidden nodes.
Collectors on the Canvas
Collector nodes appear on the canvas as special nodes connected to the devices they monitor. Collectors are shown as muted, dashed-border nodes and are automatically positioned near their monitored devices.
Area Boundaries
Toggle View > Area Boundaries to draw colored translucent hull overlays around each OSPF area's or IS-IS level's devices, visualizing area containment. The hulls automatically redraw as you pan, zoom, or move nodes. This setting persists across sessions via user settings.
Grid Background
Toggle View > Grid to show a dot-grid background on the canvas. This is a visual aid for manual node placement and works independently of the snap-to-grid feature in the bottom bar.
5. View Controls
All view settings are accessed from the View menu in the menu bar.
Node Labels
| Mode | Shows |
|---|---|
| System Default | Uses the admin-configured display name mode (see Admin > System Settings > Display) |
| SNMP Hostname | Device hostname from SNMP sysName |
| DNS Hostname | Reverse DNS (PTR) name |
| Router ID | OSPF router ID (IPv4 address) or IS-IS system ID (XXXX.XXXX.XXXX) |
| Hostname (IP) | Hostname with router ID in parentheses |
| Area ID | OSPF area ID for the device's primary area |
| No Labels | Hides all node labels |
Link Labels
Three independent toggles (can be combined):
- Cost: IGP metric cost (forward/reverse if asymmetric). OSPF cost or IS-IS wide metric.
- Interface Names: SNMP-discovered interface names (abbreviated, e.g., `Gi0/0/1`).
- IP Addresses: Endpoint IPs with CIDR notation.
Color Modes
| Mode | Description |
|---|---|
| By Area | Each OSPF area or IS-IS level gets a distinct color. Builtin pack icons are recolored to match; imported packs keep original colors. Edges always colored by area. |
| By Metric Cost | Gradient from green (low cost) to red (high cost) |
| By Utilization | Heatmap based on SNMP traffic data (green 0% to yellow 50% to red 100%) |
| Uncolor | All nodes/links use default neutral styling |
Utilization coloring requires SNMP targets to be configured and polling. It auto-refreshes every 15 seconds.
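The green-to-yellow-to-red utilization gradient described above can be sketched as a piecewise linear RGB interpolation. This is an illustrative Python sketch, not Osprey's implementation; the exact color stops are assumptions (the real palette is theme-dependent).

```python
def util_color(pct):
    """Map a utilization percentage to a heatmap hex color (sketch).

    Linear interpolation: green at 0%, yellow at 50%, red at 100%.
    The specific RGB stops below are illustrative assumptions.
    """
    pct = max(0.0, min(100.0, pct))
    green, yellow, red = (0, 200, 0), (230, 200, 0), (220, 0, 0)
    if pct <= 50:
        a, b, t = green, yellow, pct / 50
    else:
        a, b, t = yellow, red, (pct - 50) / 50
    r, g, bl = (round(a[i] + (b[i] - a[i]) * t) for i in range(3))
    return f"#{r:02x}{g:02x}{bl:02x}"

# util_color(0) is pure green, util_color(50) yellow, util_color(100) red
```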
Themes
Osprey supports seven color themes. Select via View > Theme:
| Theme | Description |
|---|---|
| Dark | Default dark theme |
| Retro | Windows 95 retro with white document area |
| High Contrast | Maximum contrast for accessibility (WCAG AAA) |
| Alphabet | Light sky-blue blueprint |
| Midnight | Deep desaturated blue-grey with steel teal accent |
| Horizon | Warm neutral light with professional blue accent |
| Morning | Soft warm light with sunrise palette |
Themes use CSS custom properties with semantic design tokens. All components -- including the Cytoscape canvas and the SSH terminal -- automatically adapt.
Icon Packs
Change device icons via the icon-pack dropdown in the bottom bar:
- Clarity (default): Project Clarity (MIT license) router icons with cutout arrows. ABRs, ASBRs, and ABR+ASBR dual-role routers each get a distinct badge icon (dot, star, star-in-dot). Transparent background — area colors fill the icon circle directly.
- Cisco: Traditional Cisco-style network icons loaded from SVG files. ABRs, ASBRs, and ABR+ASBR routers use dedicated Cisco icon variants.
- Imported packs: Custom icon packs imported from Visio stencil files (.vssx). Available after import via the Icon Library.
Area coloring: Builtin packs (Clarity, Cisco) recolor icons by OSPF area or IS-IS level. Imported packs preserve their original brand colors on the canvas — only edges are colored by area.
The selected icon pack, per-device-type defaults, and per-device icon overrides are all carried into Visio exports — the exported .vsdx matches the canvas appearance.
Importing Custom Icon Packs (Admin)
- Click "Manage..." in the icon pack dropdown (or Admin > Icon Library).
- Click "Import Stencil" in the Icon Library panel.
- Select a
.vssxVisio stencil file. The file is uploaded and each master shape is converted to SVG. - The new pack appears in the toolbar dropdown and can be selected immediately.
Vendor stencils from Cisco, Juniper, Arista, Fortinet, and others are supported. Both geometry-based shapes and EMF-embedded icons are converted.
Per-Device Icon Overrides
Right-click any device on the canvas to access icon overrides:
- Change icon... — Opens the Icon Library panel. Select any icon from any pack to override this device's icon.
- Reset icon — Removes the per-device override, reverting to the pack's default icon for this device type.
Overrides are stored in your user settings and persist across sessions.
Panel Toggles
The View menu also contains toggles for UI panels:
- Grid: Show/hide the dot-grid background.
- Area Boundaries: Show/hide colored area hull overlays.
- L2 Devices: Show/hide the L2 topology overlay (only appears when L2 data is available).
- Sidebar: Show/hide the left hierarchy navigation panel.
- Activity Tray: Show/hide the bottom-left event/incident activity panel.
Activity Tray
The Activity Tray is a collapsible panel anchored to the bottom-left of the screen. It provides real-time visibility into topology events, alerts, and application logs. Resize it vertically by dragging its top edge.
The tray has two tabs for all users, and a third admin-only tab:
- Events: Live topology events (device/link additions, removals, state changes) and correlated incidents. Events arrive via WebSocket in real time. New events trigger a badge count and the list auto-scrolls to the latest entry.
- Alerts: Currently firing and acknowledged alerts with severity badges. Acknowledge or resolve alerts directly from this tab.
- Logs (admin only): Real-time application log viewer streaming logs from all Osprey services (engine, API, collector-manager, SNMP poller, collectors). Includes level filtering (debug/info/warn/error), service filtering, and text search. Consecutive identical log messages are collapsed with a "(message repeated N times)" counter to reduce noise. Auto-scroll can be toggled on/off.
Tip: The Logs tab lets you handle day-to-day operational debugging directly from the browser, replacing `journalctl` for most tasks.
6. Device & Link Inspection
Node Detail Panel
Click any device on the canvas to open the Node Detail Panel -- a floating, draggable, resizable panel managed by the Panel Manager. It appears over the canvas and can be repositioned, minimized to the pill bar, or resized via dual-edge handles (see Panel Management below).
Header (PanelChrome): Shows the device icon (colored to match the device's area color from the canvas) and the resolved display name as the primary label. For OSPF devices, the router ID is the secondary label (if different from the display name). For IS-IS devices, the system ID (XXXX.XXXX.XXXX format) is the secondary label, with the router ID demoted. The DNS name appears as a tertiary label if available and different from both. The title bar provides minimize and close buttons. Close with the X button, press Escape to minimize, or Shift+Escape to close.
Critical state banner: When all links on a device are down (isolated node), a red "CRITICAL -- Device Down" banner appears below the header. This is derived from live topology state, not from alert rules, and also works during time-travel playback.
Device Info section:
- Device Type, Vendor, Model, Platform, Software Version
- Role badges (ABR, ASBR, L1/L2, Collector) displayed as colored tags. IS-IS L1/L2 routers show the same badge style as OSPF ABRs.
- For IS-IS devices: System ID (XXXX.XXXX.XXXX format) shown below the router ID. Overload (OL) bit displayed as an amber "Overload" badge when set.
- Area memberships shown as monospace tags
- First Seen / Last Seen with relative timestamps (hover for full UTC timestamp)
Alerts section (collapsible):
- Shows non-resolved alerts (firing + acknowledged) for this device, up to 25
- Each alert displays a severity-colored dot, summary text, and relative timestamp
- Collapsed by default if there are no active alerts
Neighbors section:
- List of directly connected IGP neighbors (OSPF or IS-IS), sorted by router ID
- Each neighbor row shows: state indicator (green/red dot), router ID, link type (IS-IS shows "Broadcast" instead of "transit"), and cost (single value for symmetric, directional arrows for asymmetric)
- For IS-IS: neighbors visible in both L1 and L2 are merged into a single entry with level badges (L1/L2) instead of appearing twice
- If the neighbor has a hostname different from its router ID, it appears as a secondary line
- Click any neighbor to navigate to its Node Detail Panel
Traffic section (when SNMP is configured):
- Collapsible via a toggle in the section header
- Aggregate summary line: active/total interfaces, max utilization, total in/out rates
- Per-interface table sorted by highest utilization first, showing: interface name, in rate, out rate, utilization bar with percentage, and error count badge if non-zero
- Hover over the interface name to see the interface speed
- Automatically activates 5-second SNMP boost polling while the panel is open (with a 30-second heartbeat). The panel polls for updated traffic data every 10 seconds.
- Click any interface row to expand an inline utilization history chart (SVG area chart) with a period selector (24h, 7d, 30d). The chart is color-coded from green (low) through yellow and orange to red (high utilization), with an 80% threshold line and average utilization marker.
Recent Events section (collapsible, at the bottom):
- Last 10 topology events for this device, fetched from the API on panel open
- Each event shows a color-coded type badge (green for added/up, red for removed/down, yellow for changed), the entity type, and a relative timestamp
Quick action buttons (pinned at the bottom of the panel):
- SSH: Open an SSH terminal session to this device
- Route Src: Set this device as the SPF path source
- Route Dst: Set this device as the SPF path destination
Note: Additional actions (Show events, Show routing table, Show neighbors) are available via the right-click context menu, not from the panel footer.
Link Detail Panel
Click any link on the canvas to open the Link Detail Panel -- a floating, draggable, resizable panel managed by the Panel Manager (same behavior as the Node Detail Panel).
Header (drag handle): Shows the link state (green/red dot), source and target hostnames connected by an arrow, and router IDs beneath if hostnames are available. Drag the header bar to reposition the panel anywhere on screen.
Critical state banner: When the link state is down, a red "CRITICAL -- Link Down" banner appears below the header. This is derived from live topology state, not from alert rules, and also works during time-travel playback.
Link metrics: Link type (P2P, broadcast, etc.) and IGP cost (OSPF cost or IS-IS metric). Asymmetric costs are displayed with both forward and reverse values plus an "asym" warning badge.
Multi-protocol tabs (merged edges only): When a link carries multiple protocols (e.g., OSPFv2 + OSPFv3 + IS-IS on the same wire), a tab bar appears below the header with a Physical tab and one tab per protocol. Each protocol tab shows a state dot (green/red). Single-protocol links display the standard layout without tabs.
- Physical tab (default): Protocol comparison table showing each protocol's area, cost, and state. Endpoint identities (A/Z with hostname and router ID). Shared physical metadata: DNS, MTU, first seen.
- Per-protocol tabs (e.g., "OSPFv2", "IS-IS"): Protocol-specific addressing (IPv4/IPv6 for OSPF, system ID for IS-IS), cost, link type, and auth type. Timer mismatches are shown as inline warnings only when the two sides disagree — no timer section when everything matches. Detail is lazy-loaded per tab.
Endpoints (A/Z) (single-protocol links): Each endpoint is displayed in a compact card showing:
- Endpoint label (A or Z), device hostname (clickable -- navigates to the Node Detail Panel), and router ID
- Interface name, IP address with CIDR mask, and interface alias/description if available
- Interface speed displayed on the right
IGP Timers (single-protocol links): A side-by-side comparison table of timer settings for both endpoints. For OSPF: Hello Interval, Dead Interval, Auth Type, Network Type, Cost, and Speed. For IS-IS: Hello Interval, Hold Time, Metric, Circuit Type, and Speed. Mismatches are highlighted in amber with a warning icon. Requires the SNMP poller to have walked the OSPF-MIB or ISIS-MIB interface table.
Traffic section (when SNMP is configured):
- Bidirectional display: A-to-Z direction (source device's outbound) and Z-to-A direction (target device's outbound)
- Each direction shows: device names with a directional arrow, real-time rate in bps, a sparkline chart with historical data points, and PPS count
- Combined utilization bar showing the maximum utilization of both endpoints
- Traffic data is updated via REST polling and WebSocket push, with automatic 5-second SNMP boost while the panel is open
- Utilization History: A period-selectable chart (24h, 7d, 30d) showing utilization trends for both endpoints. Color-coded area chart with threshold and average markers.
Errors section (shown only when errors or discards are detected):
- Per-endpoint error and discard counters: In Errors, Out Errors (with rate per second), In Discards, Out Discards
- Highlighted with a red background and border for visibility
Alerts section (collapsible, lazy-loaded):
- Shows non-resolved alerts (firing + acknowledged) for the link's endpoint devices, up to 25
- Click to expand; alerts are fetched on first expansion only
Events section (collapsible, lazy-loaded):
- Click to expand. Events are fetched from the API on first expansion only.
- Shows up to 10 recent events for both endpoint devices, deduplicated and sorted by time
Quick action buttons (pinned at the bottom):
- Inspect [source name]: Open the source device's Node Detail Panel
- Inspect [target name]: Open the target device's Node Detail Panel
Resize: Drag any edge or corner handle to resize the panel. Minimum size is 360x360 pixels; default is 504x624 pixels.
Close with the X button or press Shift+Escape. Press Escape to minimize to the pill bar.
Panel Management
Osprey v2.0 replaces the previous mutually-exclusive drawer/modal pattern with a desktop-style Panel Manager. All detail views, reports, tools, and terminals now open as floating managed panels that can coexist on screen simultaneously.
Key behaviors:
- Multiple panels: Up to 4 panels can be open at once. Opening a 5th panel automatically minimizes the least-recently-used panel to make room. Up to 16 total panels (open + minimized) are tracked; exceeding this limit auto-closes the oldest minimized panel.
- PanelChrome: Every managed panel has a consistent title bar (PanelChrome) with the panel title, a minimize button, and a close button. Drag the title bar to reposition the panel.
- Minimize / Restore: Click the minimize button (or press Escape) to collapse a panel to a compact pill in the pill bar at the bottom of the screen. The panel remains mounted with `display: none` -- all internal state (scroll position, column configuration, form input) is preserved. Click the pill to restore the panel to its previous position and size.
- Resize: Drag any edge or corner of a panel to resize it (dual-edge handles). Each panel type has its own minimum size constraint.
- Z-ordering: Click any panel to bring it to the front. Panels maintain a stacking order; the most recently interacted panel is always on top.
- Cascade positioning: New panels spawn at the right edge of the viewport and cascade leftward with 24px offsets, avoiding overlap with the sidebar and previously opened panels.
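The open/minimize bookkeeping above (4 open, 16 tracked, least-recently-used eviction) can be sketched as a small state machine. This is an illustrative Python sketch of the policy, not Osprey's UI code; the class and method names are assumptions.

```python
from collections import deque

MAX_OPEN, MAX_TOTAL = 4, 16  # limits stated in the guide

class PanelManager:
    """Sketch of the panel open/minimize policy (names illustrative)."""

    def __init__(self):
        self.open = []            # most recently used at the end
        self.minimized = deque()  # oldest minimized at the left

    def focus(self, panel):
        # Bring to front: move to the most-recently-used position.
        self.open.remove(panel)
        self.open.append(panel)

    def open_panel(self, panel):
        if len(self.open) >= MAX_OPEN:
            # Opening a 5th panel minimizes the least-recently-used one.
            self.minimized.append(self.open.pop(0))
        self.open.append(panel)
        # Exceeding 16 tracked panels auto-closes the oldest minimized.
        while len(self.open) + len(self.minimized) > MAX_TOTAL:
            self.minimized.popleft()

pm = PanelManager()
for name in ["p0", "p1", "p2", "p3", "p4"]:
    pm.open_panel(name)
# p0 was minimized to make room for p4; 4 panels remain open
```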
Keyboard shortcuts:
| Shortcut | Action |
|---|---|
| `Escape` | Minimize the focused panel |
| `Shift+Escape` | Close the focused panel |
| `Ctrl+Shift+M` | Minimize all open panels (clear the canvas) |
| `Ctrl+Shift+R` | Restore all minimized panels |
What is managed: All 26 report/list/tool views (Routers, Links, Interfaces, Prefixes, LSDB Browser, Neighbor Table, BGP Peers, BGP Routes, Health, IP Conflicts, SPF Tree, Congestion Trend, Timer Consistency, MTU Mismatch, Best Practices, etc.), the Node Detail Panel, Link Detail Panel, SSH terminal sessions, Activity panels, Simulation panels, Path Info, Traffic Graphs, and the Icon Library are all managed panels.
Traffic Graphs (MRTG-Style)
Per-interface historical traffic charts provide classic MRTG-style visualization of bandwidth usage over time. Access them from:
- Edge context menu: Right-click a link and select Traffic: <interface-name> -- one menu item appears per endpoint interface.
- Link Detail Panel: Click the Traffic A or Traffic Z buttons in the traffic section.
The chart renders as an inline SVG with the classic MRTG dual-area layout:
- Green area (above baseline): Inbound traffic rate.
- Blue area (mirrored below baseline): Outbound traffic rate.
- Time window selector: Choose from 24h, 7d, or 30d to adjust the visible history.
Below the chart, summary statistics are displayed:
| Metric | Description |
|---|---|
| Max In / Max Out | Peak inbound and outbound rates in the period |
| Avg In / Avg Out | Average inbound and outbound rates |
| Errors | Total in/out error count |
| Utilization | Peak utilization percentage |
Traffic graph data is sourced from the hourly-bucketed utilization_history table, populated by the SNMP poller. Longer time windows (7d, 30d) provide a broader trend view at hourly granularity.
Tip: Traffic graphs require SNMP targets to be configured and the SNMP poller to have collected at least one discovery+counter cycle. If no historical data is available, the chart area displays a "No data" message.
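Hourly bucketing as described can be sketched as a timestamp truncation. This is an illustrative Python sketch; `hour_bucket` is a hypothetical helper, not part of Osprey.

```python
from datetime import datetime, timezone

def hour_bucket(ts):
    """Truncate a sample timestamp to its hourly bucket (sketch)."""
    return ts.replace(minute=0, second=0, microsecond=0)

sample = datetime(2024, 5, 1, 13, 42, 7, tzinfo=timezone.utc)
bucket = hour_bucket(sample)  # 2024-05-01 13:00:00 UTC
```

Every sample that falls within the same hour maps to the same bucket row, which is why the 7d and 30d windows show hourly granularity.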
Context Menu (Right-Click)
Right-click a device to access a categorized context menu with section headers:
Inspect section:
| Action | Description |
|---|---|
| Inspect | Open the Node Detail Panel |
| Show events | Open Activity Tray filtered to this device |
| Show routing table | Open the IGP routing table viewer for this device (OSPF RIB or IS-IS routes) |
| Show neighbors | Open the Neighbor Table panel for this device |
| View timeline | Open chronological event history for this device |
| SPF tree from here | View the Dijkstra shortest-path tree rooted at this device (see SPF Tree) |
Routing section:
| Action | Description |
|---|---|
| Set as route source | Mark as SPF path source (green highlight) |
| Set as route destination | Mark as SPF path destination (red highlight) |
Layout section:
| Action | Description |
|---|---|
| Hide [device name] | Remove from canvas (reversible via Unhide All) |
| Re-layout neighbors | Reapply layout to this device and its direct neighbors using the fCoSE algorithm. If multiple nodes are selected, this becomes "Re-layout N selected" and applies to all selected nodes. |
Icons section:
| Action | Description |
|---|---|
| Change icon... | Override this device's icon from the Icon Library |
| Reset icon | Revert to the default icon for this device type |
Connect section:
| Action | Description |
|---|---|
| SSH to [device name] | Open an SSH terminal session to this device |
Export section (when 1+ nodes selected):
| Action | Description |
|---|---|
| Export N selected > PNG | Export selected nodes and mutual edges as a PNG image |
| Export N selected > SVG | Export selected nodes and mutual edges as a vector SVG |
| Export N selected > Visio | Export selected nodes and mutual edges as a .vsdx file |
Danger zone (below a separator):
| Action | Description |
|---|---|
| Delete [device name] | Permanently remove the device from the topology. Admin only; enabled only when the device is stale or all its connected links are down. Requires confirmation. |
Right-click a link for:
| Action | Description |
|---|---|
| Inspect link | Open the Link Detail Panel |
| Show events | Open Activity Tray filtered to this link |
| View timeline | Open chronological event history for this link |
| Delete link | Permanently remove the link. Admin only; enabled only when the link is down or stale. Requires confirmation. |
SPF Path Visualization
To visualize the shortest path between two devices:
- Right-click the source device and select Set as route source (or click the Route Src button in the Node Detail Panel).
- Right-click the destination and select Set as route destination (or click the Route Dst button).
- The path is highlighted on the canvas. A Route Path overlay appears in the top-left corner of the canvas showing:
- Source device (green dot) and destination device (red dot)
- Forward path total cost and hop count (orange)
- Reverse path total cost and hop count (blue)
- An "Asymmetric routing detected" warning if forward and reverse costs differ
- A Clear button to remove the path visualization
You can set source and destination independently -- the path is computed automatically once both are set. Osprey computes the actual Dijkstra SPF on the server side, correctly handling asymmetric cost paths where the forward and reverse routes may traverse different links.
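To illustrate why forward and reverse paths can differ, here is a minimal Dijkstra sketch over a directed cost map. The toy topology and function are illustrative only, not Osprey's server-side implementation.

```python
import heapq

def spf(adj, src):
    """Dijkstra over a directed cost map {node: {neighbor: cost}} (sketch)."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, c in adj.get(u, {}).items():
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

# Toy asymmetric topology: A->B costs 10, but B->A costs 100.
adj = {"A": {"B": 10}, "B": {"A": 100, "C": 5}, "C": {"B": 5}}
fwd, _ = spf(adj, "A")  # forward A->C total cost 15
rev, _ = spf(adj, "C")  # reverse C->A total cost 105 -> asymmetric
```

Because each direction is computed from its own root over directed costs, the forward and reverse paths can have different totals, which is exactly what the "Asymmetric routing detected" warning flags.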
Protocol Toggle (Multi-Protocol Networks)
In networks running both OSPF and IS-IS, the Route Path panel includes a protocol toggle ([OSPF] [IS-IS]). The toggle auto-detects which protocols the source device participates in and disables unavailable options (grayed out with a tooltip explaining why). When you change the source device, the toggle automatically switches to a protocol the device supports. Path computation uses only areas belonging to the selected protocol.
IS-IS Address Family Selector
When the selected protocol is IS-IS, the Route Path panel replaces the protocol toggle with an address family selector: [CLNS] [IPv4] [IPv6]. The buttons are dynamic -- each is visible only if the IS-IS instance advertises that protocol capability (TLV 129), and enabled only if actual route data exists in the database for that AF.
- CLNS (default for IS-IS): Displays System IDs and full NET addresses per hop instead of Router IDs and interface IPs. The TracerouteTable uses a CLNS-specific layout: Hop, System ID, Level, Circuit, Cost. The prefix input is hidden (CLNS routing is purely topology-based). When all hops share the same level, the Level column is hidden and a summary line shows the level instead.
- IPv4: Standard traceroute with IPv4 interface addresses, same as OSPF.
- IPv6: Traceroute with IPv6 interface addresses from TLV 236 prefixes.
L1/L2 transition annotations appear at boundary hops when the path crosses between IS-IS levels.
The endpoint selector adapts to the selected AF: in CLNS mode, the interface IP picker is hidden (devices are identified by System ID only). In IPv4/IPv6 mode, the picker shows the relevant address family's interfaces.
SR-MPLS Label Stack
When viewing an IS-IS IPv4 or IPv6 path to a specific prefix, and the network has SR-MPLS data (Prefix SID, SRGB from TLV 242), the TracerouteTable shows an additional Label column. Each hop displays the computed MPLS label for that segment:
- Index mode: label = SRGB base + SID index (most common deployment)
- Absolute label: used directly when the V-flag is set on the Prefix SID
The Label column auto-hides when no SR-MPLS data is available for the path. SR-MPLS data is extracted from IS-IS TLV 242 (Router Capability), sub-TLV 3 (Prefix SID on TLV 135/236), and sub-TLV 31 (Adjacency SID on TLV 22).
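The per-hop label computation described above reduces to a small rule. This is an illustrative Python sketch; the function name is an assumption, but the arithmetic (SRGB base + SID index, or the SID value used directly when the V-flag is set) follows the two modes listed.

```python
def sid_label(srgb_base, sid_value, v_flag):
    """Compute the MPLS label for a Prefix SID (sketch).

    v_flag set  -> sid_value is already an absolute label.
    v_flag clear -> index mode: label = SRGB base + SID index.
    """
    if v_flag:
        return sid_value
    return srgb_base + sid_value

# Common deployment: SRGB base 16000, prefix-SID index 42 -> label 16042
```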
Failed Path Diagnostics
When no path can be found between two devices, the Route Path panel displays a step-by-step explanation of why the path failed instead of just "No path found". The diagnosis covers:
- Shared area analysis — whether source and destination share any areas
- Backbone reachability — whether both devices can reach the backbone (area 0.0.0.0 or Level 2)
- Entry/exit ABR analysis — which ABRs connect each device's areas to the backbone
- Area membership mismatches — when devices exist in completely separate areas with no inter-area path
Forward path failures are shown in orange; reverse path failures in blue.
When forward and reverse costs are equal, the panel shows whether the routes are congruent (same links in both directions) or different (same cost but traversing different links, highlighted in yellow).
Endpoint Address Selection
Each endpoint (source and destination) in the Route Path panel has a chevron button. Click it to expand an interface address picker showing the device's interfaces grouped into Loopback and Transit / P2P sections. Select a specific interface IP to compute the path to/from that address, or choose Any (device) to use the device itself. When a source address is IPv4, the destination picker filters out IPv6 addresses (and vice versa).
Route Explanation
Each direction (forward and reverse) in the Route Path panel includes an Explain button. Clicking it expands a numbered explanation of the routing decision:
- OSPF Intra-area (O): Identifies the shared area and SPF cost.
- OSPF Inter-area (O IA): Shows the backbone transit path and ABR transitions with per-segment costs.
- OSPF External (E1/E2): Explains the LSA type, ASBR, external metric, and forwarding address if applicable.
- IS-IS L1 (I L1): Intra-level route within Level 1. Uses I L1 badge and IS-IS hostnames in explanation.
- IS-IS L2 (I L2): Intra-level route within Level 2. Uses I L2 badge. Explanation references "level-2 backbone".
- IS-IS L1->L2 / L2->L1: Inter-level route showing L1/L2 router transition.
- ECMP note: When multiple equal-cost paths exist, the explanation notes the count.
The OSPF Routing Table (right-click > Show routing table) also supports explanations: click any route row with a triangle indicator to expand inline derivation steps showing how the route's metric was computed.
SPF Tree
Right-click any device and select SPF tree from here to view the Dijkstra shortest-path tree rooted at that device within its OSPF area.
- Single-area devices: Clicking the menu item opens the tree directly.
- ABR / multi-area / L1L2 devices: A submenu appears listing each area or level the device belongs to. Select which area's SPF tree to view. In mixed OSPFv2+v3 deployments where the same dotted-quad area exists in both protocols, a suffix disambiguates: "Area 0.0.0.0 (v2)" vs "Area 0.0.0.0 (v3)". For IS-IS L1/L2 routers, the submenu shows "Level 1" and "Level 2".
- When viewing a single area in the sidebar: Always opens directly (the area is already determined).
The SPF Tree panel shows:
| Element | Description |
|---|---|
| Header | "SPF Tree" with area label badge, root device name, node count, and max cost |
| Expand/Collapse | "Expand all" and "Collapse all" controls |
| Tree rows | Indented tree with CSS connector lines. Each row shows: device name (hostname or router ID), ABR badge (blue), ASBR badge (amber), and collapsed child count hint |
| Cost columns | Link cost (+N for each hop), total cumulative cost, and a proportional cost bar |
Click any device in the tree to highlight it on the canvas. The tree is time-travel aware -- when viewing historical topology, the SPF tree is computed from the historical snapshot.
7. Reports
Access all reports from the Reports menu in the menu bar. The menu is organized into submenus: Inventory, Routing, and Diagnostics, plus top-level entries for Change Summary, Topology Diff, and more.
Protocol-aware filtering: Seven OSPF-only reports (LSDB Browser, Neighbor Table, Inter-Area Routes, External Routes, IP Conflicts, Timer Consistency, Best Practices) are automatically disabled when only IS-IS areas are checked in the sidebar. Disabled reports show a tooltip explaining why they are unavailable (e.g., "OSPF only -- IS-IS uses LSPs, not LSAs"). When both OSPF and IS-IS areas are checked, all reports are available.
Most table-based report panels share these common features:
- Search: Real-time text filtering across all visible columns. Supports two modes:
- Text mode (default): Case-insensitive substring match. Behaves like a standard search box.
- Regex mode: Click the
.*button next to the search input to toggle regex mode (the button highlights when active). Use regular expression patterns for advanced filtering:gw|cr-- match rows containing "gw" OR "cr"^10\.-- match rows starting with "10."\d{3}-- match rows containing three consecutive digits0\.0\.0\.0-- match literal IP address (dots escaped)
- Invalid regex patterns (e.g.,
[unclosed) show a red border on the input and fall back to literal substring match, so typing is never broken. - The regex toggle is hidden on panels that use server-side search (L2 Neighbours, and Prefixes/External Routes/Inter-Area Routes in live mode), since the server does not support regex. In time-travel (historical) mode, these panels switch to client-side filtering and the toggle appears.
- Column configuration: Click the gear icon to show/hide columns and drag-to-reorder.
- Sorting: Click any column header to sort ascending, click again for descending.
- CSV export: Download the current (filtered, sorted) view as CSV.
- Resizable panel: Drag any edge or corner handle to resize. Minimize to the pill bar to temporarily dismiss without losing scroll position or column configuration.
Column visibility and order persist across sessions via user settings.
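The invalid-regex fallback behavior described for the search box can be sketched in Python (the UI itself runs client-side; the function name and structure here are illustrative assumptions):

```python
import re

def make_row_filter(query, regex_mode):
    """Build a row-matching predicate (sketch).

    In regex mode a valid pattern matches case-insensitively; an invalid
    pattern silently falls back to literal substring matching, so typing
    a half-finished pattern never breaks filtering.
    """
    if regex_mode:
        try:
            pat = re.compile(query, re.IGNORECASE)
            return lambda text: bool(pat.search(text))
        except re.error:
            pass  # invalid pattern -> red border in the UI, literal fallback here
    q = query.lower()
    return lambda text: q in text.lower()

rows = ["gw-core-1", "cr-edge-2", "access-3"]
match = make_row_filter("gw|cr", True)
# keeps "gw-core-1" and "cr-edge-2"
```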
Inventory Reports
Routers
Reports > Inventory > Routers
Lists all discovered routers (OSPF and IS-IS). When IS-IS areas are checked, an additional System ID column appears. CSV export filename: routers.csv.
| Column | Default Visible | Description |
|---|---|---|
| Router ID | Yes | OSPF router ID or IS-IS system ID |
| System ID | Yes (IS-IS) | IS-IS system ID (XXXX.XXXX.XXXX); shown when IS-IS areas are included |
| Name | Yes | Display name (per system name mode) |
| Vendor | Yes | SNMP sysDescr-derived vendor |
| Model | Yes | Device model |
| Platform | Yes | Hardware platform |
| Version | Yes | Software version |
| Role | Yes | ABR, ASBR, and/or Collector badges |
| Areas | Yes | OSPF area or IS-IS level memberships |
| Device Type | Yes | Router, switch, firewall, etc. |
| First Seen | No | First discovery timestamp |
| Last Seen | No | Most recent update timestamp |
Links
Reports > Inventory > Links
Lists all IGP adjacencies (OSPF and IS-IS). When IS-IS areas are checked, additional System ID columns appear and link type shows "broadcast" instead of "transit". CSV export filename: links.csv.
| Column | Default Visible | Description |
|---|---|---|
| Source | Yes | Source device display name |
| Target | Yes | Target device display name |
| Source IP | Yes | Source interface IP address |
| Target IP | Yes | Target interface IP address |
| Cost (fwd) | Yes | Forward IGP cost (OSPF cost or IS-IS metric) |
| Cost (rev) | Yes | Reverse IGP cost |
| State | Yes | Link state with color badge (green=up, red=down) |
| Source RID | No | Source router ID |
| Target RID | No | Target router ID |
| Source System ID | No (IS-IS) | Source IS-IS system ID; shown when IS-IS areas are included |
| Target System ID | No (IS-IS) | Target IS-IS system ID; shown when IS-IS areas are included |
| Source Interface | No | Source interface name |
| Target Interface | No | Target interface name |
| Speed | No | Link speed |
| Type | No | Link type (P2P, broadcast for IS-IS, transit for OSPF) |
| Area | No | OSPF area ID or IS-IS level |
Interfaces
Reports > Inventory > Interfaces
Lists all router interfaces. When IS-IS areas are checked, additional IS-IS timer columns appear. CSV export filename: interfaces.csv.
| Column | Default Visible | Description |
|---|---|---|
| Router | Yes | Router ID |
| Hostname | Yes | Device display name |
| Interface | Yes | Interface short name (e.g., Gi0/0/1) |
| IP Address | Yes | Interface IP |
| DNS Name | Yes | Reverse DNS (PTR) for the interface IP |
| Mask | Yes | Subnet mask |
| Peer | Yes | Peer router ID |
| Cost | Yes | IGP interface cost/metric |
| Type | Yes | Link type |
| Speed | No | Interface speed |
| Description | No | ifAlias description |
| Full Name | No | Full interface name (ifDescr) |
| Hello Int. | No | OSPF hello interval (seconds) |
| Dead Int. | No | OSPF dead interval (seconds) |
| Auth Type | No | OSPF authentication type |
| Net Type | No | OSPF network type |
| IS-IS Hello | No (IS-IS) | IS-IS hello interval; shown when IS-IS areas are included |
| IS-IS Hold | No (IS-IS) | IS-IS hold time; shown when IS-IS areas are included |
| IS-IS Metric | No (IS-IS) | IS-IS interface metric; shown when IS-IS areas are included |
| Circuit Type | No (IS-IS) | IS-IS circuit type; shown when IS-IS areas are included |
| Level | No (IS-IS) | IS-IS interface level; shown when IS-IS areas are included |
| First Seen | No | First discovery timestamp |
| Last Seen | No | Most recent update timestamp |
Software Versions
Reports > Inventory > Software Versions
A specialized report (not a standard table) that groups devices by software attribute for fleet-wide analysis.
- Group-by selector: Choose how to group: by Software Version, Vendor, Platform, or Vendor + Version combination.
- Summary bar: Shows total device count and number of distinct groups.
- Group rows: Each row shows the group name, device count, and a percentage distribution bar. The group with the fewest devices is highlighted with a "rarest" badge.
- Expandable rows: Click any group to expand and see the individual devices in that group with their router ID, hostname, and additional details.
- Search filter: Filter groups by name.
- CSV export: Export the grouped data as `software-versions.csv`.
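The grouping logic amounts to a counted group-by with percentage shares and a "rarest" flag. The sketch below illustrates the idea; the field names are assumptions, not Osprey's actual schema.

```python
from collections import Counter

def group_devices(devices, key="version"):
    """Group devices by an attribute and compute count, share of
    fleet, and the 'rarest' group (fewest devices), as in the
    Software Versions report. Field names are illustrative."""
    counts = Counter(d[key] for d in devices)
    total = sum(counts.values())
    rarest = min(counts, key=counts.get)  # smallest group gets the badge
    return [
        {"group": g, "count": n,
         "pct": round(100 * n / total, 1), "rarest": g == rarest}
        for g, n in counts.most_common()
    ]
```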
CDP/LLDP Neighbours
Reports > Inventory > CDP/LLDP Neighbours
Lists all LLDP and CDP neighbor adjacencies discovered via SNMP across the network. Neighbours with only telephone capability (0x20) are excluded (IP phones, cameras).
- Summary bar: Total neighbour count, unique remote devices, LLDP count, CDP count.
- Columns: Local Device, Local Port, Remote Device, Remote Port, Mgmt IP, Protocol (LLDP/CDP/LLDP+CDP badge), Capabilities, Chassis ID, Platform, Software Version, Native VLAN, First Seen, Last Seen.
- Software Version: Extracted from CDP (`cdpCacheVersion`) or LLDP (`lldpRemSysDesc`). Long version strings are truncated at "Technical Support:" for readability.
- Native VLAN: Extracted from CDP (`cdpCacheNativeVLAN`) when available.
- Protocol merging: When both LLDP and CDP discover the same neighbor, the record shows "LLDP+CDP" and combines data from both protocols.
- Column configuration: Toggle column visibility, drag to reorder, resize. Settings are persisted in your user profile.
- Search: Free-text filter across all visible columns.
- Sort: Click any column header to sort ascending/descending.
- CSV export: Export visible data as `l2-neighbors.csv`.
- Hide entries: Click the eye icon on any row to hide it from the report and the L2 canvas overlay. Hidden entries are excluded from SNMP-driven L2 topology views. The hide flag persists across discovery cycles — new polls do not reset it. To see hidden entries, check the "Hidden (N)" checkbox in the toolbar. Click the eye icon again to unhide.

Data source: `GET /api/v1/l2-neighbors/network/{networkID}`. Requires L2 enrichment to be enabled in SNMP settings.
Routing Reports
Prefixes
Reports > Routing > Prefixes
Lists stub networks (OSPF connected subnets and IS-IS IP reachability prefixes). CSV export filename: prefixes.csv.
Columns: Network, Cost, Router, First Seen, Last Seen.
Inter-Area Routes
Reports > Routing > Inter-Area Routes
Shows Type 3 LSA summary routes learned via ABRs. CSV export filename: inter-area-routes.csv.
Columns: Network, Metric, Advertising Router, Router, First Seen, Last Seen.
External Routes
Reports > Routing > External Routes
Shows Type 5 and Type 7 external routes. CSV export filename: external-routes.csv.
Columns: Network, Type (E1/E2 badge), Metric, Advertising Router, Router, Forward Address, NSSA (amber badge for Type 7 routes), Tag, First Seen, Last Seen.
LSDB Browser
Reports > Routing > LSDB Browser
A raw view of the link-state database. This is a specialized panel (not a standard table). The content adapts based on whether you are viewing an OSPF or IS-IS protocol instance.
OSPF mode (6 tabs): Router (Type 1), Network (Type 2), Summary (Type 3), ASBR (Type 4), External (Type 5), NSSA (Type 7).
IS-IS mode: Shows LSP entries organized by level. Each LSP row displays: System ID, LSP number, sequence number, remaining lifetime, and flags (overload bit, attached bit). Expandable rows show decoded TLV contents.
Common features:
- Summary badges: Each tab header shows a count badge with the number of LSAs/LSPs of that type.
- Expandable rows: Click any LSA/LSP row to see its full decoded content.
- Search filter: Filter by advertising router/system ID, link ID, or content.
- Auto-refresh toggle: Enable to automatically refresh the LSDB view every 30 seconds when topology updates occur. A manual Refresh button is also available.
Note: Live LSDB header fields (Age, Sequence Number, Checksum, Options) are persisted by the engine from collector snapshots. If these fields are not yet available (e.g., on fresh installations), a notice is displayed in the summary bar.
Neighbor Table
Reports > Routing > Neighbor Table
Shows IGP adjacencies (OSPF or IS-IS) per device. Also accessible via right-click > Show neighbors on any device.
- Device selector dropdown: Choose a specific device to view its neighbors, or view all devices.
- Summary bar: Shows total neighbor count with up/down breakdown (e.g., "12 neighbors: 11 up, 1 down").
- 9 displayed columns: Neighbor ID (with ABR/ASBR flag badges), Hostname, State (green/red dot with FULL/DOWN label), Address (local and remote IPs), Interface name, Cost (single value for symmetric, with forward/reverse arrows for asymmetric costs), Type (P2P, broadcast), Area, Last Seen.
- CSV export: Exports 14 columns (including additional fields) as `neighbors.csv`.
BGP Peers
Reports > Routing > BGP Peers
Lists all BGP peers discovered via BMP (BGP Monitoring Protocol). Requires at least one BMP target configured via the API -- see Setting Up BGP Monitoring.
| Column | Description |
|---|---|
| Peer IP | Remote BGP peer IP address |
| AS | Remote peer ASN |
| Type | Internal (iBGP) or external (eBGP) |
| AF | Address family (IPv4 / IPv6) |
| State | Peer state with colored dot (green = established, red = down) |
| Prefixes | Number of prefixes received from this peer |
| Router ID | Peer's BGP router ID |
| Last Seen | Most recent update from this peer |
- Search: Free-text filter across all columns.
- Pagination: Server-side, with configurable page size.
- CSV export: Download filtered data as `bgp-peers.csv`.
BGP Routes
Reports > Routing > BGP Routes
Searches the BGP best-path table (engine-computed from BMP data). Supports three prefix match modes for flexible route analysis.
- Match modes (selector next to the search field):
  - Exact: Returns only the prefix that exactly matches your search (e.g., `10.0.0.0/24` returns only `10.0.0.0/24`).
  - Longest match: Returns the most specific (longest prefix length) matching entry, like a router's forwarding-table lookup.
  - Covered: Returns all prefixes that fall within the searched prefix (e.g., `10.0.0.0/8` returns `10.0.0.0/24`, `10.1.0.0/16`, etc.).
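The three match modes can be demonstrated with the standard library's `ipaddress` module. This is a client-side sketch of the semantics, not Osprey's server-side implementation.

```python
import ipaddress

def match_routes(table, query, mode):
    """Apply one of the three prefix match modes to a list of
    CIDR strings: exact, longest (most specific covering entry),
    or covered (all entries inside the queried prefix)."""
    q = ipaddress.ip_network(query)
    nets = [ipaddress.ip_network(p) for p in table]
    if mode == "exact":
        return [str(n) for n in nets if n == q]
    if mode == "longest":
        # most specific table entry that contains the queried prefix
        covering = [n for n in nets if q.subnet_of(n)]
        return [str(max(covering, key=lambda n: n.prefixlen))] if covering else []
    if mode == "covered":
        return [str(n) for n in nets if n.subnet_of(q)]
    raise ValueError(f"unknown mode: {mode}")
```

For instance, querying `10.0.0.0/8` in covered mode against a table containing `10.0.0.0/24` and `10.1.0.0/16` returns both, while exact mode returns only an identical entry.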
| Column | Description |
|---|---|
| Prefix | BGP route prefix (CIDR notation) |
| Next Hop | BGP next-hop IP address |
| AS Path | Full AS path string |
| Origin AS | Originating ASN |
| LOCAL_PREF | BGP local preference value |
| MED | Multi-Exit Discriminator |
| Source | How the best path was determined: loc_rib (from router's own LocRIB via BMP) or inferred (engine-computed best-path selection) |
| ECMP | Equal-cost multipath count -- the number of next-hops that tied across all 6 best-path comparison steps |
| Updated | Timestamp of last best-path change |
- Pagination: Server-side with offset capped at 100,000 to prevent slow queries on large RIBs. Use prefix/AS filters to narrow results.
- CSV export: Download filtered data as `bgp-routes.csv`.
Tip: ECMP count > 1 means multiple BGP next-hops had equal LOCAL_PREF, AS path length, origin, MED (for same-origin AS), and IGP cost. This is useful for verifying load-balancing behavior.
Diagnostic Reports
Topology Health
Reports > Diagnostics > Topology Health
Client-side analysis that scans the current topology for issues. Results are grouped into 5 categories, each collapsible:
- Down Links (error severity): Links in a non-operational state.
- Asymmetric Costs (warning severity): Links where forward and reverse IGP costs differ.
- Isolated Devices (error severity): Routers with no active adjacencies (excludes collector nodes).
- Single-Homed Routers (info severity): Devices with only one link (potential SPOFs).
- ABR Anomalies (warning severity): ABRs with unexpected area memberships.
A summary bar at the top shows error/warning/info counts. Each finding is clickable -- selecting it highlights the affected device or link on the canvas. Device names respect the system-wide Device Name Format setting (Admin > System Settings > Display) -- showing hostnames, DNS names, router IDs, or hostname+IP depending on your configuration. CSV export filename: topology-diagnostics.csv.
IP Conflicts
Reports > Diagnostics > IP Conflicts
Detects four categories of IP address conflicts, displayed in tabs:
- Duplicate Router IDs: Multiple devices using the same OSPF router ID or IS-IS system ID.
- Duplicate IPs: Multiple interfaces on different devices with the same IP address.
- Duplicate Prefixes: The same stub network advertised by 3+ devices (normal for 2 on point-to-point links).
- External Conflicts: External routes for the same prefix with different metrics or types from different ASBRs.
Each tab shows severity badges (critical, warning). Click any finding to highlight it on the canvas. Device names respect the system-wide display name mode setting. CSV export filename: ip-conflicts.csv.
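The Duplicate IPs check reduces to grouping interfaces by address and flagging any IP seen on more than one device. A minimal sketch, with an assumed input shape:

```python
from collections import defaultdict

def find_duplicate_ips(interfaces):
    """Flag IP addresses configured on interfaces of more than one
    device, as in the Duplicate IPs tab. The dict keys ('device',
    'ip') are illustrative, not Osprey's actual schema."""
    by_ip = defaultdict(set)
    for iface in interfaces:
        by_ip[iface["ip"]].add(iface["device"])
    return {ip: sorted(devs) for ip, devs in by_ip.items() if len(devs) > 1}
```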
Single Points of Failure
Reports > Diagnostics > Single Points of Failure
Identifies network reliability risks using graph theory, displayed in 2 tabs:
- Articulation Points: Devices whose failure would partition the network. Shows how many disconnected segments would result.
- Bridge Links: Links whose failure would partition the network.
Severity levels: critical (red) for high-impact SPOFs affecting 3+ segments, warning (yellow) for lower impact (2 segments). Results are sorted by impact descending. Click any finding to highlight it on the canvas. Device names respect the system-wide display name mode setting. CSV export filename: spof-report.csv.
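Articulation points are a classic graph-theory construct; the standard way to find them is Tarjan's DFS low-link algorithm. The sketch below shows the idea on an adjacency map (it is not Osprey's implementation, and uses recursion, so very large graphs would need an iterative variant):

```python
def articulation_points(adj):
    """Return devices whose removal partitions the network, using
    Tarjan's low-link DFS. adj maps node -> list of neighbors."""
    disc, low, aps = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v not in disc:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # non-root u is an articulation point if no back edge
                # from v's subtree climbs above u
                if parent is not None and low[v] >= disc[u]:
                    aps.add(u)
            elif v != parent:
                low[u] = min(low[u], disc[v])
        if parent is None and children > 1:
            aps.add(u)  # root is an AP only with 2+ DFS children

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return aps
```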
Routing Stability
Reports > Diagnostics > Routing Stability
Analyzes topology event patterns to identify flapping links, unstable devices, and area-level instability. Displayed in 3 tabs:
- Flapping Links: Links with frequent state changes. Severity-coded by transition count: critical (10+ transitions, red), warning (5-9, yellow), info (2-4, blue).
- Unstable Routers: Devices with high event counts. Severity: critical (50+ events), warning (20-49), info (fewer than 20).
- Area Scores: Per-area instability scoring based on event density. Each area receives a numeric stability score derived from the total event count relative to the number of devices and links in that area. Higher scores indicate more volatile areas that may warrant investigation. Areas are sorted by score descending.
Period selector: 1 hour, 6 hours, 24 hours, or 7 days. CSV export filename: stability-report.csv.
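The severity bands documented above map directly to simple threshold functions:

```python
def flapping_link_severity(transitions):
    """Severity bands for flapping links, per the report: 10+ is
    critical, 5-9 warning, 2-4 info; below 2 is not reported."""
    if transitions >= 10:
        return "critical"
    if transitions >= 5:
        return "warning"
    if transitions >= 2:
        return "info"
    return None

def unstable_router_severity(events):
    """Severity bands for unstable routers: 50+ critical, 20-49
    warning, fewer than 20 info."""
    if events >= 50:
        return "critical"
    if events >= 20:
        return "warning"
    return "info"
```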
Congestion Trend
Reports > Diagnostics > Congestion Trend
Identifies interfaces with sustained high utilization over configurable periods. Requires SNMP targets configured and hourly utilization bucketing data.
- Period selector: 24 hours, 7 days, or 30 days.
- Threshold selector: 60%, 70%, 80%, or 90% utilization.
| Column | Description |
|---|---|
| Device | Device display name |
| Interface | Interface name |
| Speed | Interface speed |
| Max % | Peak utilization in the period |
| Avg % | Average utilization in the period |
| Hours Above | Hours above the selected threshold |
| Trend | Direction indicator (increasing, stable, or decreasing) |
| History | Inline sparkline chart showing utilization over time |
CSV export filename: congestion-trend.csv.
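The Max %, Avg %, and Hours Above columns are straightforward aggregates over the hourly utilization buckets. A sketch, assuming a plain list of hourly percentages as input:

```python
def congestion_stats(hourly_util, threshold):
    """Summarize hourly utilization samples (percent) the way the
    Congestion Trend columns do: peak, average, and hours spent
    above the selected threshold. Input shape is illustrative."""
    peak = max(hourly_util)
    avg = sum(hourly_util) / len(hourly_util)
    hours_above = sum(1 for u in hourly_util if u > threshold)
    return {"max_pct": peak, "avg_pct": round(avg, 1), "hours_above": hours_above}
```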
Timer Consistency
Reports > Diagnostics > Timer Consistency
Detects IGP timer and configuration mismatches across link endpoints. Supports both OSPF and IS-IS links. Requires the SNMP poller to have walked the OSPF-MIB or ISIS-MIB interface table.
- Summary bar: Shows total endpoints checked and mismatch count.
- Mismatches table with 5 columns: Parameter (with severity badge), Side A (device name), Value A, Side B (device name), Value B.
Severity levels:
- Critical: Hello interval, dead/hold interval, or authentication type mismatches (these prevent adjacency formation).
- Warning: Network type mismatches (OSPF) or metric type mismatches (IS-IS narrow vs. wide).
IS-IS-specific checks include hello interval, hold time, and metric width consistency. Best practice warnings are raised for IS-IS links still using narrow metrics (max 63).
Click any finding to highlight the affected link on the canvas. CSV export filename: timer-consistency.csv.
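The per-link comparison behind this report is a parameter-by-parameter diff of the two endpoints, with the severity rules stated above. A minimal OSPF-only sketch with assumed dict keys:

```python
CRITICAL_PARAMS = {"hello_interval", "dead_interval", "auth_type"}

def check_link_timers(side_a, side_b):
    """Compare OSPF endpoint parameters and classify mismatches:
    hello/dead/auth differences are critical (they prevent adjacency
    formation), network-type differences are warnings. The dict keys
    are illustrative, not Osprey's actual schema."""
    findings = []
    for param in ("hello_interval", "dead_interval", "auth_type", "network_type"):
        va, vb = side_a.get(param), side_b.get(param)
        if va is not None and vb is not None and va != vb:
            sev = "critical" if param in CRITICAL_PARAMS else "warning"
            findings.append({"param": param, "a": va, "b": vb, "severity": sev})
    return findings
```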
MTU Mismatch
Reports > Diagnostics > MTU Mismatch
Detects MTU mismatches between interfaces on opposite ends of a link. Requires the SNMP poller to have walked IF-MIB ifMtu (.1.3.6.1.2.1.2.2.1.4), which is collected during the discovery interval (default 6 hours, configurable via Admin > System Settings > SNMP > Polling).
- Summary bar: Shows count of mismatched links.
- Mismatches table with 4 columns: Side A (device + interface + IP), MTU A, Side B, MTU B.
- Click any row to highlight the affected link on the canvas.
- CSV export filename: `mtu-mismatch-report.csv`.
MTU mismatches are common on links between routers using different default MTUs (e.g., jumbo frames on one side, 1500 on the other). While OSPF uses interface MTU from the DBD exchange to detect this at adjacency formation, the MTU Mismatch report provides a topology-wide overview without requiring a GRE collector.
The MTU value also appears in the Link Detail Panel OSPF Configuration section and as a hidden column in Reports > Inventory > Interfaces (toggle via column picker).
Best Practices
Reports > Diagnostics > Best Practices
OSPF-only. Audits the topology for compliance with OSPF design best practices. Findings are grouped into categories:
- Backbone Design: Checks for non-contiguous backbone (area 0), ABRs not connected to area 0, etc.
- Stub Compliance: Verifies stub and NSSA area configurations.
- Router ID: Detects non-loopback router IDs and duplicate router IDs.
- Passive Interfaces: Flags stub networks on non-passive interfaces.
Each finding has a severity level (critical, warning, info) and lists the affected devices. Click any device to highlight it on the canvas. CSV export filename: best-practices.csv.
Note: IS-IS best practices checks are not yet available. When only IS-IS areas are selected, this report is disabled.
Change Summary
Reports > Change Summary
Shows a summary of recent topology changes with visual distribution analysis.
- Period selector: 1 hour, 6 hours, 24 hours, 7 days, or 30 days.
- Summary cards: Device changes (added/removed/changed counts), Link changes, and Stub Network changes with color-coded badges.
- Distribution chart: Visual breakdown of change types over the selected period.
- Other Events section: Lists topology events that don't fall into the device/link/stub categories.
Topology Diff
Reports > Topology Diff
Compare topology at two points in time:
- Set the From and To timestamps using datetime pickers (defaults to last 24 hours).
- Click Compare.
- Results show in 3 tabs: Devices, Links, Stub Networks.
- Each entry shows whether it was added (green), removed (red), or changed (amber) with summary badges per tab.
- Click any entry to highlight it on the canvas.
CSV export filename: topology-diff.csv. Useful for change review, maintenance window validation, and troubleshooting.
Dependency Impact
Reports > Dependency Impact
Analyzes upstream/downstream dependency relationships and failure impact across the topology. The report has three tabs:
- Peer AS: Summarizes BGP autonomous system dependencies. Shows each peer AS with prefix count, percentage of total prefixes, and a no-alternative ratio indicating how many prefixes are reachable only through that peer. Requires BMP targets configured with AS-scoped data. Expand any row to see the list of affected prefixes and alternate paths.
- Critical Pairs: Identifies device pairs whose simultaneous failure would partition the network or isolate significant portions of the topology. Each pair shows the number of affected devices and links. Expand any row to see the detailed failure impact (disconnected segments, traffic rerouting).
- SRLG / Devices: Lists SRLG groups and individual devices with failure impact analysis. Filter between SRLG groups, individual devices, or all. Each entry shows the number of affected prefixes, links, and devices if that entity fails. Expand any row to see detailed downstream impact.
Each tab supports expandable drill-down rows. Clicking a row header fetches impact detail lazily via a POST request, so large topologies remain responsive.
8. Tools
Search (Ctrl+K)
Tools > Search or press Ctrl+K / Cmd+K.
Opens a search dialog for quickly locating any device or interface. Type to search across:
- Router ID, Hostname, DNS Name, Label, Vendor (device fields)
- Interface name, IP address, interface description (interface fields)
- CIDR subnet containment (e.g., entering `192.168.1.0/24` finds all interfaces in that subnet)
Results are grouped into Devices and Interfaces sections with match highlighting. Use arrow keys to navigate results, Enter to select. The selected device is highlighted on the canvas and its Node Detail Panel opens.
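Subnet containment search is exactly the membership test the stdlib `ipaddress` module provides. A sketch of the matching rule, with an assumed interface record shape:

```python
import ipaddress

def interfaces_in_subnet(interfaces, cidr):
    """Return every interface whose IP falls inside the searched
    subnet, as the Ctrl+K dialog does for CIDR queries. The 'ip'
    key is illustrative."""
    net = ipaddress.ip_network(cidr, strict=False)
    return [i for i in interfaces if ipaddress.ip_address(i["ip"]) in net]
```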
Time Travel
Tools > Time Travel
View the network topology at any historical point:
- Activate Time Travel to show the time scrubber bar at the bottom of the screen.
- The bar has transport controls: skip backward/forward, step, play/pause.
- Adjust playback speed (0.5x to 10x).
- Drag the scrubber slider to any point in time.
- The topology canvas reconstructs the historical state from snapshots.
- An amber tint on the bar indicates you're viewing historical data.
- Click the green Live button to return to real-time.
Keyboard shortcuts while in Time Travel mode: Space (play/pause), Left arrow (step backward), Right arrow (step forward).
Time travel uses hash-deduplicated per-area JSONB snapshots, so only actual changes consume storage.
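Hash deduplication means identical topology states share a single stored blob, and the timeline only holds references. The mechanics below are an assumption for illustration (the guide states only that snapshots are hash-deduplicated per-area JSONB):

```python
import hashlib
import json

class SnapshotStore:
    """Sketch of hash-deduplicated snapshot storage: identical
    topology states share one stored blob, so only actual changes
    consume space. Not Osprey's actual engine code."""
    def __init__(self):
        self.blobs = {}      # content hash -> snapshot JSON
        self.timeline = []   # (timestamp, content hash) references

    def save(self, ts, topology):
        blob = json.dumps(topology, sort_keys=True)  # canonical form
        h = hashlib.sha256(blob.encode()).hexdigest()
        self.blobs.setdefault(h, blob)               # store content once
        self.timeline.append((ts, h))

    def at(self, ts):
        """Reconstruct the latest snapshot at or before ts."""
        past = [entry for entry in self.timeline if entry[0] <= ts]
        if not past:
            return None
        return json.loads(self.blobs[max(past)[1]])
```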
When time travel is active, the Node Detail Panel and Link Detail Panel adapt automatically:
- An amber banner shows "Viewing at [timestamp]" at the top.
- Critical state banners (device down / link down) reflect the historical topology state at the playback timestamp.
- Live traffic polling and boost are suppressed.
- The Events section filters to ±1 hour around the playback timestamp.
- Utilization charts show a vertical highlight marker at the playback time.
Combined with Engineering Mode: Time Travel and Engineering Mode can be used simultaneously. Activate time-travel from within simulation (clock icon in the engineering mode bar) or enter simulation while already in time-travel. The playback controls appear below the engineering mode bar. Advancing through snapshots during playback automatically re-evaluates SPF with the current mutations applied to each historical snapshot (debounced at 500ms for fast playback). Use the close button on the historical timestamp badge to exit time-travel without leaving simulation.
Refresh DNS
Tools > Refresh DNS
Forces a full re-resolution of all PTR (reverse DNS) records for device router IDs and interface IP addresses. Use this when:
- DNS records have been updated (renumbering, device replacement)
- New PTR records have been added
- You want to sync DNS names after zone changes
The refresh happens asynchronously -- the engine clears its DNS cache and re-resolves all cached IPs. Results appear within seconds.
Engineering Mode (What-If Simulation)
Tools > Engineering Mode, or click the Simulate button in the toolbar.
Engineering Mode lets you model failure scenarios, cost changes, hypothetical links, and shared risk link group (SRLG) failures on the live topology without affecting the real network. All SPF computation happens server-side using RFC 2328-correct multi-area Dijkstra for OSPF (with inter-area routing via backbone ABRs) and ISO 10589-correct multi-level Dijkstra for IS-IS (with inter-level routing via L1/L2 routers) -- no changes are written to the database.
Entering simulation mode:
When activated, the canvas displays an amber overlay border and a "SIMULATION" watermark to clearly distinguish the simulated view from the live topology. The normal top bar is replaced by the EngineeringModeBar, which provides simulation controls: Live/Simulated view toggle, Run SPF button (with dirty indicator), Auto checkbox for automatic recomputation, Undo/Redo, Assessment (batch failure analysis), and Exit.
Tip: Engineering Mode and Time Travel can be used simultaneously. Click the clock icon in the engineering mode bar to activate time-travel, or enter Engineering Mode while already viewing historical topology. Simulation results automatically recompute as you advance through snapshots during playback.
Simulating failures and cost changes:
- Right-click a link and select Simulate Failure to take the link down in the simulation, or Change Cost... to open the CostChangePopover for inline OSPF cost editing on that edge.
- Right-click a node and select Simulate Node Failure to take the device and all its links down.
Hypothetical routers:
- Right-click empty canvas space and select Add Hypothetical Router... to place a synthetic device on the topology.
- Enter a display name (e.g. "new-core-rtr"). The node appears with a dashed cyan border to distinguish it from real devices.
- The hypothetical router has no area membership at creation — it joins areas when you connect hypothetical links to it, matching real OSPF behavior where area membership is determined by interfaces, not the router.
- If you connect a hypothetical router to devices in two or more areas, it is automatically classified as an ABR.
- Right-click a hypothetical router to remove it (all connected hypothetical links are cascade-deleted) or to add links/simulate failures.
Hypothetical links:
- Right-click a node (real or hypothetical) and select Add Hypothetical Link... to create a synthetic link between two devices.
- Click the source device on the canvas, then click the target device.
- Select the OSPF area for the link. If both devices share exactly one common area, it is auto-selected. If they share multiple areas, choose from a dropdown (shows area ID and protocol). If no common area exists, all areas from both devices are offered with a warning that the other device will be added to the selected area.
- Set the OSPF cost (symmetric by default, or set forward/reverse independently).
- Hypothetical links appear as dashed green edges on the canvas and are included in SPF computation.
SRLG (Shared Risk Link Group):
- Right-click a node and select Start SRLG Group... to begin defining a group.
- While the SRLG panel is open, right-click additional nodes or links and select Add to SRLG Group to accumulate members.
- Name the group and click Create SRLG. The group appears as a single mutation in the Mutations tab, but all members fail simultaneously during SPF computation.
- Load Saved SRLG: Click the "Load Saved SRLG" dropdown to populate members from a persistent SRLG group. Saved SRLGs are managed via Admin > SRLG Manager (network-scoped, stores link memberships in the database). When a saved SRLG is applied, the server expands it to individual link failures automatically.
SRLG Manager (Admin modal):
Manage persistent SRLG definitions that can be reused across simulation sessions. Open via the Admin menu. Select a network, then create/edit/delete SRLG groups with named link members. Each SRLG is scoped to a single network and stores its member links in the database. These groups appear in the "Load Saved SRLG" dropdown during Engineering Mode.
SimulationPanel (right side, 380px):
The simulation panel has 3 tabs:
| Tab | Purpose |
|---|---|
| Mutations | Lists all simulated changes (failures, cost overrides, hypothetical routers, hypothetical links, SRLG groups). Add, remove, or toggle individual mutations. Undo/Redo support for iterative exploration. SRLG entries show member count. Includes Scenario Manager for saving and loading named mutation sets (see below). |
| Impact | Shows traffic redistribution: before/after utilization per link, congestion risk badges (ok < 80%, warning 80-95%, critical >= 95%). Also shows isolated devices and affected links. |
| Paths | Before/after SPF path comparison for all affected device pairs: cost, route type, and hop-by-hop path. |
Batch failure assessment:
Click the Assessment button in the EngineeringModeBar to open the batch assessment panel. This iterates every link or every node in the topology, simulates individual failure for each, and shows which devices become unreachable (isolated from the network).
Only failures that cause actual device isolation are shown — redundant links/nodes whose failure has no isolation impact are filtered out. Click any row to expand it and see the names of the unreachable devices.
Results are displayed in a sortable table. Click column headers to sort by devices lost or entity name. Use Export CSV to download the full assessment for offline analysis.
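The assessment loop itself is simple: fail one link at a time, walk the remaining graph, and keep only failures that strand devices. A BFS-based sketch (reachability is measured from an arbitrary first node, which is sufficient to detect partitioning):

```python
def batch_link_assessment(adj, links):
    """Fail each link in turn and report only failures that isolate
    devices; redundant links with no isolation impact are filtered
    out. adj maps node -> list of neighbors."""
    def reachable(failed):
        start = next(iter(adj))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if {u, v} != set(failed) and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    results = []
    total = set(adj)
    for link in links:
        lost = total - reachable(link)
        if lost:
            results.append({"link": link, "devices_lost": sorted(lost)})
    # sort by impact descending, as the assessment table does
    return sorted(results, key=lambda r: len(r["devices_lost"]), reverse=True)
```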
Scenarios (save/load):
Engineering Mode supports saving mutation sets as named scenarios for reuse:
- Click the Save button in the EngineeringModeBar to save the current mutations. If a scenario is already loaded, it overwrites that scenario; otherwise it prompts for a new name.
- Click the folder icon to open the Scenario List, which shows all saved scenarios. Click any scenario to load its mutations onto the canvas.
- Scenarios are stored per-user and scoped to the protocol instance or network.
- Use scenarios to preserve pre-validated maintenance plans, compare alternative failure mitigation strategies, or share standard test cases across sessions.
Workflow example:
- Click Simulate in the toolbar.
- Right-click a core link and select Simulate Failure.
- Open the Impact tab to see traffic redistribution and which devices become isolated.
- Open the Paths tab to compare original vs. new path costs for affected device pairs.
- Right-click another link and select Change Cost... to test a traffic engineering adjustment.
- Click Assessment to run a batch analysis of all links and identify the most critical failure points.
- Right-click a device and select Add Hypothetical Link... to test adding redundancy.
- Right-click empty canvas space and select Add Hypothetical Router... to model a planned new device. Connect it with hypothetical links to evaluate the routing impact.
- Use Undo in the Mutations tab to step back through changes.
- Click Exit to return to the live topology.
Tip: Engineering Mode is useful for pre-validating maintenance plans. Before shutting down a link for maintenance, simulate the failure to verify that all devices remain reachable, that traffic redistribution stays within capacity, and that no new single points of failure are created. Use the batch assessment to identify the network's most critical links before they become a problem. For incident post-mortems, combine Engineering Mode with Time Travel to replay a past outage and test whether proposed topology changes would have prevented the impact.
Topology Export
Osprey supports three export formats for the topology canvas.
PNG Export
Topology > Export as PNG
Exports the current canvas view as a raster image:
- All visible nodes, edges, labels, and area boundary hulls are included.
- The background uses the active theme color (dark or light).
- If the grid is enabled (View > Grid), the grid pattern is rendered in the export.
- Area boundaries (colored convex hulls) are included if area coloring is active.
- Output resolution matches the canvas viewport.
SVG Export
Topology > Export as SVG
Exports the canvas as a scalable vector graphic:
- Same content as PNG (nodes, edges, labels, area boundaries, grid).
- Vector output — scales to any size without loss of quality.
- Theme-aware: background and element colors match the active theme.
- Suitable for embedding in reports, presentations, or printing.
Visio Export
Topology > Export as Visio
Generates a native Microsoft Visio (.vsdx) file from the topology data:
- Server-side rendering using node positions from the canvas.
- A4 landscape page with title block and legend.
- Icon packs: Builtin (Clarity/Cisco) and imported Visio stencil packs are supported. The export uses the active icon pack from the toolbar.
- Per-device overrides: If you assigned a custom icon to a specific device (right-click > Change icon), that override is rendered in the Visio file.
- Per-type defaults: Device-type role mappings from the Icon Library (e.g. all routers use a specific stencil icon) are respected.
- Priority: Per-device override > per-type default > active pack fallback.
- Imported vendor stencil icons render as-is (multi-color), matching the canvas behavior.
- Hidden nodes are excluded from the export.
- Requires at least one area selected in the sidebar.
Selection Export
Select one or more nodes on the canvas (shift-click or box-drag), then right-click to open the context menu. The Export N selected submenu offers:
- PNG — Raster image of only the selected nodes and their mutual edges. Non-selected elements are excluded; the bounding box fits the selection.
- SVG — Vector graphic of the selection. Same filtering as PNG: only selected nodes, connecting edges, and relevant area hulls.
- Visio — Server-side .vsdx containing only the selected devices and their mutual links. Useful for extracting a subnet or site from a larger topology into a Visio diagram.
Both L3 (router) and L2 (switch) nodes can be selected and exported together. Edges that connect a selected node to a non-selected node are excluded.
9. Alerts & Incidents
Alert System
Osprey monitors the network and fires alerts when conditions are met. Access via the Alerts menu in the menu bar.
Built-in Alert Rules
One system alert rule is seeded by default:
- SNMP Target Failure: Auto-disables targets after 10 consecutive poll failures (configurable via Admin > System Settings > SNMP > Polling > Consecutive Failure Limit).
Create additional alert rules under Admin > Monitoring > Alert Rules. The Alert Rule Manager provides templates for common rules (congestion warning/critical thresholds, interface error rate) that you can add with one click. You can also create custom rules from scratch.
Alert Bell
The bell icon in the top-right header shows the count of firing alerts (displays "9+" when more than 9 are active). Click the bell to open a dropdown panel showing up to 20 active alerts. Each alert in the dropdown displays:
- A severity-colored indicator dot (red for critical, yellow for warning, blue for info)
- Alert summary text
- Router ID (if applicable) and relative timestamp (e.g., "5m ago")
- An Ack button to acknowledge firing alerts directly from the dropdown
The dropdown closes when you click outside it or press Escape.
Managing Alerts
Alerts > Active Alerts: Opens the Activity Tray showing all currently firing and acknowledged alerts with severity badges. Acknowledge or resolve alerts from this view.
Alerts > Alert History: Opens the Activity Tray with the alerts tab for browsing current and past alerts.
Admin > Monitoring > Notification Channels (admin or engineer): Configure where alerts are sent:
- Webhook: POST JSON to any URL (custom headers supported)
- Email: SMTP delivery (STARTTLS auto-negotiation, implicit TLS on port 465, optional authentication)
- Slack: Incoming webhook with formatted messages (severity emoji, rule name, summary)
- Microsoft Teams: Incoming webhook with MessageCard format (color-coded severity)
- In-App: Delivered via the activity tray (no external configuration needed)
Each channel has a Test button that sends a real test notification to verify delivery. If delivery fails, the specific error is displayed (e.g., SMTP connection refused, HTTP 404).
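When debugging webhook delivery, a throwaway local receiver is handy: point the channel's URL at it, press Test, and inspect exactly what arrives. A minimal sketch using only the Python standard library (the port and path are arbitrary choices, not Osprey defaults):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HookHandler(BaseHTTPRequestHandler):
    """Accept any POST, pretty-print the JSON body, and acknowledge with 200."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            print(json.dumps(json.loads(body), indent=2))   # pretty-print alert JSON
        except ValueError:
            print(body.decode(errors="replace"))            # non-JSON: show raw
        self.send_response(200)                             # sender sees success
        self.end_headers()

def serve(port=9000):
    HTTPServer(("", port), HookHandler).serve_forever()
```

Run `serve()`, set the webhook URL to `http://<your-host>:9000/`, and trigger the channel's Test button to see the payload.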
Incident Correlation
Osprey's event correlation engine automatically groups related topology events into incidents. For example, if a core router goes down, the subsequent link failures on all its interfaces are grouped into a single incident with the device failure identified as the root cause.
Incidents appear in:
- The Active Incidents card on the dashboard.
- The Activity Tray (bottom-left), mixed with standalone events.
- IncidentCard widgets with severity-colored borders, collapsible child events, and clickable root cause.
Correlation uses a 30-second time window and prioritizes event types: device_down > node_failure > asbr_withdrawal.
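The grouping logic can be approximated as follows. The 30-second window and the priority ordering come from the description above; the function itself is an illustrative sketch, not Osprey's implementation:

```python
from datetime import datetime, timedelta

# Higher index = higher root-cause priority (device_down outranks the others).
PRIORITY = ["asbr_withdrawal", "node_failure", "device_down"]
WINDOW = timedelta(seconds=30)

def correlate(events):
    """Group events within a 30-second window into incidents; mark the
    highest-priority event in each incident as the root cause."""
    incidents = []
    for ev in sorted(events, key=lambda e: e["time"]):
        if incidents and ev["time"] - incidents[-1]["start"] <= WINDOW:
            incidents[-1]["events"].append(ev)      # within the window: same incident
        else:
            incidents.append({"start": ev["time"], "events": [ev]})
    for inc in incidents:
        inc["root_cause"] = max(inc["events"], key=lambda e: PRIORITY.index(e["type"]))
    return incidents

t0 = datetime(2024, 1, 1, 12, 0, 0)
incidents = correlate([
    {"type": "device_down", "time": t0},
    {"type": "node_failure", "time": t0 + timedelta(seconds=5)},
])
# One incident; device_down is identified as the root cause.
```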
Maintenance Windows
Admin > Monitoring > Maintenance Windows
Schedule maintenance periods to suppress alerts:
- Click Create Window.
- Set the start and end time.
- Choose the scope: global, specific network, routing domain, protocol instance, area, or device.
- Add a description.
During an active maintenance window, matching alerts are suppressed. The window appears with status indicators: scheduled, active, or expired.
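Conceptually, suppression is a two-part check: the window must be active right now, and its scope must match the alert. A simplified sketch (scope matching is reduced to global/device here; real windows can also target networks, routing domains, protocol instances, and areas):

```python
from datetime import datetime

def is_suppressed(alert, windows, now):
    """Return True if any maintenance window is active and in scope for the alert."""
    for w in windows:
        active = w["start"] <= now <= w["end"]
        in_scope = w["scope"] == "global" or w.get("device") == alert.get("device")
        if active and in_scope:
            return True
    return False

win = {"start": datetime(2024, 1, 1, 2, 0), "end": datetime(2024, 1, 1, 4, 0),
       "scope": "device", "device": "core-1"}
# Alerts for core-1 are suppressed between 02:00 and 04:00, then fire normally.
```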
10. SSH Terminal
Osprey provides a browser-based SSH terminal to connect directly to network devices.
Connecting
- Right-click a device on the canvas and select SSH to [device name].
- Or click the SSH button in the Node Detail Panel.
- Enter your device credentials (username/password).
- A terminal opens using xterm.js with full interactive support.
The SSH connection is proxied through the Osprey API server via WebSocket. The SSH proxy can be completely disabled by an administrator via Admin > System Settings > Security > Transport > Disable SSH Terminal Proxy. When disabled, the SSH context menu item and SSH button are hidden across the UI, and WebSocket connections are rejected at the API level.
Telnet fallback is available but disabled by default for security (telnet transmits credentials in cleartext). To enable it, set Allow Telnet Fallback in Admin > System Settings > Security. When enabled and SSH fails, the proxy falls back to telnet (port 23) with a warning displayed to the user.
Session Recording
If enabled by the administrator (set ssh_recording: true under the api section of osprey.yaml), SSH session output is recorded for audit purposes. Administrators can review recordings under Admin > Users & Security > Device Sessions. Session log retention is configurable in Admin > System Settings > Retention > SSH Session Logs (default: 90 days).
When OSPREY_ENCRYPTION_KEY is configured, session recordings are encrypted at rest using AES-256-GCM before storage. Legacy plaintext recordings are transparently decrypted on read.
Session History
Admin > Users & Security > Device Sessions
Browse all SSH sessions with:
- User, Target Device, Connected/Disconnected timestamps, Duration, IP Address
- Terminal-style log viewer for recorded sessions
- Search and CSV export
11. Administration
Admin-only features are under the Admin menu in the menu bar. The Admin menu is visible to users with admin or engineer roles. Engineers can access monitoring features (Alert Rules, SNMP Targets, Credential Profiles, Notification Channels, Maintenance Windows, Device Sessions) but cannot manage users, system settings, API keys, audit logs, or backups. Operator-role users cannot see the Admin menu.
Users & Security
User Management
Admin > Users & Security > Users
- Create users: Set username, password, display name, and role (admin, engineer, or operator).
- Edit users: Change display name, role, or active status.
- Reset passwords: Admin can reset any user's password.
- Delete users: Remove user accounts (cannot delete yourself).
Password complexity requirements are enforced per the settings in Admin > System Settings > Security (see System Settings below).
Login Sessions
Admin > Users & Security > Login Sessions
View currently logged-in users:
- Username, IP Address, User Agent, Last Activity
- Activity status: online (active), idle, offline
- Force Logout: Revoke individual sessions or all sessions for a user
- Auto-refreshes every 15 seconds
API Keys
Admin > Users & Security > API Keys
Create API keys for headless or automated access:
- Click Create Key.
- Set a name, role (admin/engineer/operator), and optional expiration.
- The raw key is shown once -- copy it immediately.
- Use the key via HTTP header:
X-API-Key: osprey_<key>
API keys bypass JWT authentication and are ideal for scripts, monitoring integrations, and CI/CD pipelines.
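For example, a script can attach the header with Python's standard library. The endpoint path below is illustrative only; substitute a real API route, and replace the key with one created in the Admin UI:

```python
import urllib.request

# Hypothetical endpoint path for illustration. The key value is shown once
# at creation time under Admin > Users & Security > API Keys.
req = urllib.request.Request(
    "https://osprey.example.com/api/v1/devices",
    headers={"X-API-Key": "osprey_REPLACE_ME"},
)
# urllib.request.urlopen(req) would perform the call; with Osprey's default
# self-signed certificate you also need an ssl.SSLContext that trusts it.
```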
Device Sessions
Admin > Device Sessions
See SSH Terminal > Session History above. Note that Device Sessions is a top-level item under Admin, not nested under Users & Security.
Monitoring
Alert Rules
Admin > Monitoring > Alert Rules
Create, edit, enable/disable, and delete alert rules. System rules (e.g., SNMP Target Failure) can only be toggled, not edited or deleted. Templates for common traffic and error rules are available via the Add from Template button. See Alerts & Incidents for details on alert behavior.
SNMP Targets
Admin > Monitoring > SNMP Targets
Manage devices polled for traffic statistics:
- Add targets with IP, SNMP version, and credentials
- Auto-discover: Discover devices from seed IPs
- Monitor status: active, disabled, consecutive failures
- Targets auto-disable after a configurable number of consecutive failures (default 10, configurable in System Settings)
- Re-enable manually to resume polling
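The auto-disable behavior amounts to a simple streak counter: any successful poll resets it, and hitting the limit disables the target. A sketch of that logic (illustrative, not Osprey's code):

```python
def record_poll(target, success, limit=10):
    """Track consecutive poll failures; auto-disable the target at the limit
    (default 10, matching the Consecutive Failure Limit setting)."""
    if success:
        target["failures"] = 0           # any success resets the streak
    else:
        target["failures"] += 1
        if target["failures"] >= limit:
            target["enabled"] = False    # re-enable manually to resume polling
    return target

t = {"failures": 0, "enabled": True}
for _ in range(10):
    record_poll(t, success=False)
# After 10 consecutive failures, t["enabled"] is False.
```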
Credential Profiles
Admin > Monitoring > Credential Profiles
Create reusable SNMP credential templates:
- v2c profiles: Named community string
- v3 profiles: Username, authentication (MD5/SHA), privacy (DES/AES)
Profiles can be referenced by multiple SNMP targets and collectors, eliminating credential duplication.
Security: All SNMP credentials (community strings, v3 auth/priv passwords) are encrypted at rest in the database using AES-256-GCM when `OSPREY_ENCRYPTION_KEY` is configured. The Debian installer generates this key automatically. API responses always mask credentials with `***`.
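If you ever need to generate a compatible key by hand (the installer normally does this), the configuration reference suggests `openssl rand -base64 32`; the equivalent in Python is:

```python
import base64
import os

# 32 random bytes (a 256-bit AES key), base64-encoded to a 44-character
# string, as expected by OSPREY_ENCRYPTION_KEY.
key = base64.b64encode(os.urandom(32)).decode()
print(key)
```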
BMP Targets
BMP targets are managed directly from the sidebar. Create a BGP protocol instance (see Setting Up BGP Monitoring), then expand it in the sidebar to manage targets. Targets can also be managed via the REST API (/api/v1/bgp/targets).
Sidebar management:
- Hover over a BGP protocol instance in the sidebar and click + > Add Target.
- Enter a name, router IP, and RIB mode in the quick-add dialog.
- The target appears under the protocol instance with a status indicator.
- Right-click a target for options: View Peers, View Routes, Copy IP, or Delete.
- Toggle enable/disable with the inline toggle button.
RIB modes: loc_rib (router's best paths), adj_rib_in_post (all received routes post-policy), none (peer monitoring only).
Status indicators: Pending (awaiting connection), Connected (green, active session), Disconnected, Error (red).
When a target is deleted, all associated BGP peers and RIB data are automatically removed.
Note: The BMP server listens on TCP port 11019 by default. Configure this in `osprey.yaml` under `bmp.listen_address`. The `bmp.allowed_cidrs` setting restricts which IPs can connect.
Notification Channels
Admin > Monitoring > Notification Channels
See Alerts & Incidents > Managing Alerts for channel type details (webhook, email, Slack, Teams, in-app).
Maintenance Windows
Admin > Monitoring > Maintenance Windows
See Alerts & Incidents > Maintenance Windows for setup details.
System Settings
Admin > System Settings
Runtime configuration in a tabbed layout with a vertical sidebar for section navigation (Display, Topology, Routing, Retention, SNMP, Security, License). Each tab shows a blue dot when it has unsaved changes. Settings are saved all at once with the Save button. The dialog warns you before discarding unsaved changes. A Reset to Defaults button at the bottom-left restores all factory defaults.
Display Section
- Device Name Format: How devices are labeled across the UI (events, alerts, incidents, diagnostics). Individual users can override this for the topology canvas via View > Node Labels. IS-IS devices display hostname from TLV 137 (Dynamic Hostname) as the primary label; the name fallback chain is: hostname > router_id > system_id.
  - `hostname` -- SNMP sysName or IS-IS Dynamic Hostname (TLV 137) (default)
  - `dns` -- Reverse DNS (PTR) name
  - `router_id` -- OSPF router ID or IS-IS system ID
  - `hostname_ip` -- Hostname with router ID in parentheses
- OSPF Area ID Format: How OSPF area IDs are displayed throughout the application — sidebar, canvas, reports, alerts, and panels. This is purely visual; stored values always remain in dotted quad format. This setting does not affect IS-IS levels, which are always displayed as "Level 1" / "Level 2".
  - `dotted_quad` -- Standard dotted quad notation, e.g. 0.0.0.0, 0.0.0.1, 0.0.1.0 (default)
  - `decimal` -- Decimal integer, e.g. 0, 1, 256. Shorter and often matches what is configured on routers (`router ospf 1` / `area 0`)
- IGP Area Coloring: When a device runs both OSPF and IS-IS simultaneously, this determines which protocol's area membership is used for area-based coloring on the canvas.
  - `OSPF areas` -- Color by OSPF area membership (default)
  - `IS-IS areas` -- Color by IS-IS level membership
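The two area ID formats are just different renderings of the same 32-bit value. A quick sketch of the conversion, consistent with the examples above (e.g. 0.0.1.0 = 256):

```python
def area_to_decimal(dotted):
    """Dotted quad -> decimal: read the four octets as one 32-bit integer."""
    a, b, c, d = (int(x) for x in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def area_to_dotted(n):
    """Decimal -> dotted quad: emit the four octets high to low."""
    return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

area_to_decimal("0.0.1.0")  # 256
area_to_dotted(1)           # "0.0.0.1"
```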
Topology Section
- Stale Device Retention (hours): How long unreachable devices/links remain visible before auto-deletion. Range: 1--8760. Default: 168 hours (7 days). A warning appears if set below 24 hours, as brief maintenance windows could trigger device removal.
Routing Section
Administrative distance (AD) determines which protocol's routes are preferred when multiple protocols advertise the same prefix. Lower values are preferred. These settings affect the Route Path computation and the per-router RIB view.
- OSPFv2: Administrative distance for OSPFv2 routes (range 1--255, default: 110). Applies to all OSPFv2 route types (intra-area, inter-area, external). Internal preference within OSPF (intra > inter > external) is handled by OSPF metric comparison, not AD.
- OSPFv3: Administrative distance for OSPFv3 (IPv6) routes (range 1--255, default: 110). Separate from OSPFv2 to allow independent tuning in dual-stack environments.
- IS-IS: Administrative distance for IS-IS routes (range 1--255, default: 115). Applies to both Level-1 and Level-2 routes.
- eBGP: Administrative distance for external BGP routes (range 1--255, default: 20). eBGP routes are learned from BMP peers in different autonomous systems.
- iBGP: Administrative distance for internal BGP routes (range 1--255, default: 200). A warning appears if iBGP AD is set lower than OSPFv2 AD, as this is unusual and would cause iBGP routes to be preferred over OSPF internal routes.
Example: With default values, a prefix advertised by both OSPF (AD 110) and eBGP (AD 20) will use the eBGP path. To prefer OSPF, set the OSPFv2 AD below 20 (e.g., 15).
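In code form, the selection is simply "lowest AD wins". A sketch using the default values from this section (illustrative, not Osprey's RIB implementation):

```python
# Default administrative distances from the Routing section above.
DEFAULT_AD = {"ospfv2": 110, "ospfv3": 110, "isis": 115, "ebgp": 20, "ibgp": 200}

def best_source(sources, ad=DEFAULT_AD):
    """Of the protocols advertising the same prefix, the lowest AD wins."""
    return min(sources, key=lambda proto: ad[proto])

best_source(["ospfv2", "ebgp"])                                 # "ebgp" (20 < 110)
best_source(["ospfv2", "ebgp"], {**DEFAULT_AD, "ospfv2": 15})   # "ospfv2"
```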
Retention Section
- Event History: Days to keep topology events (range 1--365, default: 90).
- Topology Snapshots: Days to keep time-travel snapshots (range 1--365, default: 30).
- SSH Session Logs: Days to keep SSH/telnet session recordings (range 1--365, default: 90).
- Audit Log: Days to keep admin audit entries (range 1--365, default: 90).
- Alerts: Days to keep alert history (range 1--365, default: 90).
- Incidents: Days to keep incident records (range 1--365, default: 90).
- User Sessions: Days to keep user session records (range 1--365, default: 90).
Note: Utilization history retention (for MRTG-style traffic graphs) is configured separately under the SNMP > Counters sub-tab, not in the Retention section.
SNMP Section
Pause All Polling: Emergency toggle at the top of the section. Stops all SNMP collection immediately. Useful during maintenance windows or network incidents.
Below the pause banner, the SNMP section is organized into 4 horizontal sub-tabs. Each sub-tab shows a blue dot when it has unsaved changes. Cross-tab warnings (e.g., "both subsystems disabled") appear above the tab bar when applicable.
Polling sub-tab:
- Interface Discovery & Enrichment (default: enabled): Full IF-MIB walks at the configured discovery interval. Discovers interface names, speeds, MTU, IP mappings, and OSPF timers. Disable if you have a stable network and want to reduce SNMP load — existing interface data is preserved.
- Discovery Interval (hours): How often the poller runs IF-MIB discovery/enrichment cycles (range 1--48, default: 6). This is dynamic — the poller re-reads the interval after each cycle completes. The YAML config value (`discovery_interval_seconds`) is used only as a startup fallback.
- Counter Poll Interval (minutes): Polling frequency for new SNMP targets, displayed in minutes (range derived from 10--3600 seconds, default: 5 minutes / 300 seconds).
- PDU Timeout (seconds): SNMP request timeout (range 1--30, default: 5).
- Retries: Number of SNMP retries on failure (range 0--10, default: 2).
- Consecutive Failure Limit: Consecutive poll failures before a target is automatically disabled (range 1--100, default: 10). A warning appears if set to 2 or below, as brief network interruptions could trigger disabling.
Enrichment sub-tab:
- Default Credential Profile: Select a saved credential profile as the default for auto-discovery (replaces the need to configure community strings per-target).
- Fallback Credential Profile: Optional secondary credential profile. When primary credentials fail (connection error or empty response), the poller automatically retries with this profile. A single failure is counted regardless of whether the fallback attempt succeeds. When the fallback succeeds, the poller logs a warning suggesting the target's credential profile be updated.
Counters sub-tab:
- Traffic Counters (default: enabled): Per-interface byte/packet/error counters collected every poll interval (default 5 minutes). Drives utilization graphs, congestion alerts, and traffic-based reports. Disable if you only need topology discovery without traffic monitoring.
- Stats Retention (days): Days to keep per-interface traffic history for MRTG-style graphs (range 1--365, default: 30). Controls how far back utilization history charts can display.
Both discovery and counters can be toggled independently. For example, you might keep discovery enabled but disable counters to reduce SNMP traffic on a bandwidth-constrained management network.
L2 Discovery sub-tab:
- L2 Enrichment: Master toggle for LLDP/CDP neighbor discovery on monitored routers.
- Switch Crawling: Discover neighboring switches (bridge capability) via LLDP/CDP. Phones, APs, and endpoints are excluded.
- Crawl Interval (hours): How often the L2 crawler runs a BFS pass, displayed in hours (range derived from 5--1440 minutes, default: 6 hours / 360 minutes).
- Max Crawl Depth: Maximum BFS hop depth from nearest OSPF router seed (range 1--10, default: 3).
- Neighbor Expiry (hours): Hours before unseen L2 neighbors are pruned (range 1--8760, default: 72 / 3 days).
These are global defaults. Individual networks can override the enrichment and crawl toggles, as well as credential profiles, via the network edit form in the sidebar. See Per-Network L2 Configuration.
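The crawl described above is a bounded breadth-first search. A sketch of the idea, assuming neighbor lists have already been filtered to switches (illustrative, not Osprey's crawler):

```python
from collections import deque

def crawl(seed_routers, neighbors, max_depth=3):
    """BFS outward from router seeds, stopping at max_depth hops
    (the Max Crawl Depth setting). `neighbors` maps a device name to its
    LLDP/CDP-discovered switch neighbors."""
    seen = set(seed_routers)
    queue = deque((r, 0) for r in seed_routers)
    discovered = []
    while queue:
        device, depth = queue.popleft()
        if depth == max_depth:
            continue  # do not expand beyond the hop limit
        for nbr in neighbors.get(device, []):
            if nbr not in seen:
                seen.add(nbr)
                discovered.append(nbr)
                queue.append((nbr, depth + 1))
    return discovered

chain = {"r1": ["sw1"], "sw1": ["sw2"], "sw2": ["sw3"], "sw3": ["sw4"]}
crawl(["r1"], chain, max_depth=3)  # ['sw1', 'sw2', 'sw3']; sw4 is 4 hops out
```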
Security Section
Password Policy:
- Minimum Password Length: Range 4--72 (bcrypt limit). Default: 8.
- Require Uppercase Letter: Default: enabled.
- Require Number: Default: enabled.
- Require Special Character: Default: enabled.
Login Protection:
- Max Login Attempts: Consecutive failed logins before lockout. Set to 0 for unlimited (no lockout). Default: 5. A warning appears if set to 1.
- Lockout Duration (minutes): How long to lock an account after exceeding max login attempts (range 1--1440). Default: 15 minutes. This field is disabled and grayed out when Max Login Attempts is set to 0.
Transport:
- Disable SSH Terminal Proxy: When enabled, the SSH terminal proxy feature is completely disabled. The right-click "SSH to" context menu item is hidden, the SSH button in node detail panels is removed, and WebSocket connections for SSH are rejected at the API level with 403. Default: disabled (SSH proxy is available). Use this in environments where the proxy is not permitted by security policy.
- Allow Telnet Fallback: When enabled, the SSH terminal proxy falls back to cleartext telnet (port 23) if SSH (port 22) fails. Default: disabled. Warning: telnet transmits credentials in cleartext. This setting has no effect when the SSH terminal proxy is disabled.
License Section
- Status table: Shows current license state — licensee, tier, node limit, current nodes, validity dates, and days remaining.
- Upload: Paste a license key into the text area and click Upload License. The key is validated against the embedded Ed25519 public key. On success, the license is activated immediately and persisted to the database.
- Evaluation mode: When no license is installed, Osprey runs in evaluation mode with full functionality and a hard limit of 16 monitored nodes. The sidebar footer shows "Evaluation Mode — 16 nodes maximum".
- Grace period: Expired licenses continue working for 30 days with amber UI warnings (top banner + sidebar). After the grace period, Osprey reverts to evaluation mode.
- Node enforcement: When the node limit is reached, new devices are not discovered. Existing devices continue receiving updates normally.
Audit Log
Admin > Audit Log
Immutable record of all administrative actions:
- Columns: Timestamp, User, Action (create/update/delete/toggle/refresh), Entity Type, Entity ID, IP Address
- Detail: Expandable JSONB detail showing what changed
- Filtering: By user, action type, entity type, and time range
- Export: CSV download
- Retention: Configurable in System Settings (default: 90 days)
Every admin action is automatically logged: creating users, modifying alert rules, changing system settings, enabling/disabling collectors, deleting devices, DNS refresh triggers, and more.
Backup & Restore
Admin > Backup & Restore
Osprey offers two backup levels:
| | Configuration Backup (JSON) | Database Backup (SQL) |
|---|---|---|
| What | Configuration only | Everything |
| Format | Readable JSON, portable | Raw SQL (pg_dump) |
| Use case | Migration to a new installation | Disaster recovery |
| Hierarchy | ✅ Networks, ASes, RDs, PIs, Areas | ✅ |
| Collectors & SNMP targets | ✅ (credentials redacted) | ✅ |
| Users & roles | ✅ (no passwords) | ✅ (with password hashes) |
| Alert rules & notifications | ✅ | ✅ |
| System settings | ✅ | ✅ |
| Maintenance windows | ✅ | ✅ |
| Topology (devices, links, interfaces) | ❌ (re-discovered automatically) | ✅ |
| Canvas layouts | ❌ | ✅ |
| User settings (theme, preferences) | ❌ | ✅ |
| Icon packs (imported Visio stencils) | ❌ | ✅ |
| SNMP credential profiles | ❌ | ✅ |
| API keys | ❌ | ✅ |
| SSH known hosts | ❌ | ✅ |
| Event history & time-travel data | ❌ | ✅ |
| Audit log | ❌ | ✅ |
| Restore mode | Additive (skips existing) | Destructive (replaces all) |
Export Configuration
Click Export Configuration to download a JSON file with hierarchy, collectors, users, alert rules, SNMP targets, and system settings. This backup is designed for quick migration — after restoring, topology is automatically re-discovered by the collectors.
Export Database
Click Export Database for a complete PostgreSQL dump. This includes all data: topology, canvas layouts, event history, icon packs, audit log, and everything in the configuration backup.
Encryption key note: The database dump contains SNMP credentials and notification channel secrets encrypted with the installation's OSPREY_ENCRYPTION_KEY. This SQL file can only be restored on an installation that uses the same encryption key. If you are migrating to a new machine, copy the encryption key first (see Migrating to a New Machine below).
Restore Configuration
- Click the file picker under Restore and select a previously exported `.json` file.
- Review the entity count preview.
- Click Import Configuration, then confirm.
Configuration restore is additive — existing entities with matching names are skipped, not duplicated. No data is deleted. Credentials (SNMP community strings, notification passwords) are not included in configuration backups — you will need to re-enter them after restore.
Restore Database
- Click the file picker under Restore Database and select a previously exported `.sql` file.
- Review the file name and size.
- Click Restore Database, then confirm.
Warning: Database restore is destructive — it replaces the entire database. All current data is lost and replaced with the contents of the SQL dump. Services should be restarted after a database restore.
Migrating to a New Machine
To migrate Osprey to a new installation with full history and working credentials:
- Export the database on the source machine (Admin > Backup & Restore > Export Database).
- Copy the encryption key from the source machine to the new machine:
  - The key is `OSPREY_ENCRYPTION_KEY` in `/etc/osprey/osprey.env`.
- Install Osprey on the new machine and configure it with the copied encryption key.
- Restore the database on the new machine (Admin > Backup & Restore > Restore Database).
- Restart all services after the restore completes.
Without the matching encryption key, the database restore will succeed but all encrypted credentials (SNMP communities, v3 passwords, notification channel secrets) will be unreadable. You would need to re-enter them manually.
If you only need to migrate configuration without history, use Export Configuration instead — it produces a portable JSON file that works on any installation (credentials are excluded and must be re-entered).
12. Keyboard Shortcuts
Access the full shortcut reference via Help > Keyboard Shortcuts or press ?.
Navigation
| Shortcut | Action |
|---|---|
| `Ctrl+K` / `Cmd+K` | Search |
| `Escape` | Minimize focused panel (or close dialog / deselect) |
| `Shift+Escape` | Close focused panel |
Panel Management
| Shortcut | Action |
|---|---|
| `Ctrl+Shift+M` | Minimize all open panels |
| `Ctrl+Shift+R` | Restore all minimized panels |
Topology Canvas
| Shortcut | Action |
|---|---|
| `+` / `=` | Zoom in |
| `-` | Zoom out |
| `0` | Fit to screen |
| Mouse wheel (scroll) | Zoom in/out |
| Click + drag on canvas | Pan |
| Click node | Select device |
| Click edge | Select link |
| Right-click | Context menu |
Selection
| Shortcut | Action |
|---|---|
| `Delete` | Delete selected device (admin only, stale/down devices) |
Time Travel
| Shortcut | Action |
|---|---|
| `Space` | Play/pause playback |
| Left arrow | Step backward |
| Right arrow | Step forward |
Help Menu
The Help menu provides:
- User Guide: Opens this user guide in a new tab.
- Keyboard Shortcuts: Opens the shortcut reference (also accessible via `?`).
- About Osprey: Shows version and build information.
13. Configuration Reference
Server Configuration (/etc/osprey/osprey.yaml)
database:
host: localhost # PostgreSQL host
port: 5432 # PostgreSQL port
name: osprey # Database name
user: osprey # Database user
password: ${OSPREY_DB_PASSWORD} # From environment
max_connections: 40 # Per-service pool size (5 services x 40 = 200 total)
ssl_mode: disable # disable | require | verify-ca | verify-full
nats:
url: ${OSPREY_NATS_URL} # nats://localhost:4222
token: ${OSPREY_NATS_TOKEN} # Optional NATS auth token
api:
listen: "127.0.0.1:8080" # API listen address (nginx proxies from 443)
ssh_recording: true # Record SSH/telnet sessions for audit
cors_origins: # Allowed CORS origins
- "https://your-domain.com"
auth:
jwt_secret: ${OSPREY_JWT_SECRET} # Must be 32+ bytes
access_token_ttl: "15m" # JWT token lifetime
refresh_token_ttl: "168h" # Refresh token lifetime (7 days)
bcrypt_cost: 12 # Password hashing cost
secure_cookies: true # HTTPS-only cookies
# Credential at-rest encryption (AES-256-GCM).
# Generate: openssl rand -base64 32
# If empty, credentials are stored in plaintext (backwards compatible).
encryption_key: ${OSPREY_ENCRYPTION_KEY:-}
# License file (Ed25519-signed). Upload via Admin UI or place file here.
# If missing/empty, runs in evaluation mode (16 nodes, full features).
license_file: /etc/osprey/license.key
topology:
stale_retention_hours: 168 # 7 days; overridable via Admin > System Settings
snmp:
discovery_interval_seconds: 21600 # Startup fallback only; overridden by per-network discovery_interval_hours (default 6h)
stats_retention_days: 30 # Days to keep interface stats
bmp:
listen_address: "127.0.0.1:11019" # BMP TCP listener; change to 0.0.0.0:11019 to accept router connections
# allowed_cidrs: [] # CIDR allowlist (empty = only bmp_target.router_ip allowed)
max_connections: 50 # Maximum concurrent BMP sessions
max_connections_per_ip: 2 # Per-source-IP connection limit
idle_timeout: "120s" # Close idle BMP sessions after this duration
allow_nat_fallback: false # Match BMP sysName when source IP differs from target IP (NAT)
metrics:
enabled: true
listen: ":9090" # Prometheus metrics endpoint
logging:
level: info # debug | info | warn | error
format: text # json | text
Environment Variables (/etc/osprey/osprey.env)
| Variable | Required | Description |
|---|---|---|
| `OSPREY_DB_HOST` | Yes | PostgreSQL host (default: localhost) |
| `OSPREY_DB_PASSWORD` | Yes | PostgreSQL password (auto-generated on install) |
| `OSPREY_NATS_URL` | Yes | NATS server URL (default: nats://127.0.0.1:4222) |
| `OSPREY_NATS_TOKEN` | No | NATS authentication token (auto-generated on install if NATS is configured with token auth) |
| `OSPREY_JWT_SECRET` | Yes | JWT signing secret (auto-generated, 32+ bytes) |
| `OSPREY_ENCRYPTION_KEY` | Recommended | AES-256 key for SNMP credential at-rest encryption (auto-generated, base64-encoded 32 bytes). If unset, credentials are stored in plaintext. |
| `OSPREY_METRICS_LISTEN` | No | Override metrics listen address per service. Set in each service's systemd unit: engine :9090, collector-manager :9091, SNMP poller :9092, API :9093, BMP server :9094. |
The environment file is created at install time with PLACEHOLDER values that the postinst script replaces with randomly generated secrets. The file is owned by root:osprey with mode 0640 to protect credentials.
Service Ports
| Service | Port | Purpose |
|---|---|---|
| API | 8080 | REST API + WebSocket |
| nginx | 443 (HTTPS), 80 (redirects to 443) | HTTPS frontend + API proxy |
| BMP Server | 11019 (TCP) | BGP Monitoring Protocol listener |
| Engine metrics | 9090 | Prometheus |
| Collector Manager metrics | 9091 | Prometheus |
| SNMP Poller metrics | 9092 | Prometheus |
| API metrics | 9093 | Prometheus |
| BMP Server metrics | 9094 | Prometheus |
| NATS | 4222 | Message bus |
| PostgreSQL | 5432 | Database |
Systemd Services
Osprey runs as five systemd services grouped under osprey.target:
| Service | Unit Name | Runs As | Description |
|---|---|---|---|
| Engine | `osprey-engine` | osprey | Topology processing, event correlation, snapshot creation |
| API | `osprey-api` | osprey | REST API, WebSocket, SSH proxy |
| Collector Manager | `osprey-collector-manager` | root | GRE tunnel and SNMP discovery collectors (requires NET_ADMIN + NET_RAW) |
| SNMP Poller | `osprey-snmp-poller` | osprey | Interface statistics and device enrichment |
| BMP Server | `osprey-bmp-server` | osprey | BGP Monitoring Protocol listener (TCP 11019) |
All five services depend on PostgreSQL and NATS (After=postgresql.service nats-server.service). They automatically restart on failure (Restart=on-failure, RestartSec=5) and have a file descriptor limit of 65536 (LimitNOFILE=65536).
# Check all Osprey services at once
systemctl status osprey.target
# Restart all services
sudo systemctl restart osprey.target
# Restart a single service
sudo systemctl restart osprey-api
# View logs for a specific service (follow mode)
sudo journalctl -u osprey-api -f
sudo journalctl -u osprey-engine -f
sudo journalctl -u osprey-collector-manager -f
sudo journalctl -u osprey-snmp-poller -f
sudo journalctl -u osprey-bmp-server -f
# View recent logs (last 100 lines, no pager)
sudo journalctl -u osprey-engine --no-pager -n 100
# View logs since last boot
sudo journalctl -u osprey-api -b
# Enable/disable a service
sudo systemctl enable osprey-api
sudo systemctl disable osprey-snmp-poller
File Locations (Debian Package)
| Path | Contents |
|---|---|
| `/usr/bin/osprey` | Main binary (all subcommands) |
| `/etc/osprey/osprey.yaml` | Server configuration |
| `/etc/osprey/osprey.env` | Environment secrets (0640 root:osprey) |
| `/etc/osprey/nats.conf` | NATS server configuration |
| `/etc/osprey/certs/` | TLS certificate and key |
| `/usr/share/osprey/web/` | Frontend static files (served by nginx) |
| `/etc/nginx/sites-available/osprey` | nginx site configuration |
| `/var/lib/osprey/` | Data directory (owned by osprey user) |
| `/var/lib/nats/jetstream/` | NATS JetStream data |
nginx Configuration
The default nginx site (/etc/nginx/sites-available/osprey) provides:
- HTTP to HTTPS redirect (port 80 to 443)
- TLS termination with self-signed certificate
- API reverse proxy: `/api/` requests forwarded to `127.0.0.1:8080`
- WebSocket support: `Upgrade` and `Connection` headers proxied, with 86400s read timeout
- SPA fallback: all non-API, non-asset requests serve `index.html`
- Cache headers: hashed assets (`/assets/`) cached for 1 year with `immutable`; `index.html` is never cached
- Security headers: `X-Content-Type-Options`, `X-Frame-Options`, `Strict-Transport-Security`
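The API proxy stanza of that site file looks roughly like this. This is a simplified sketch based on the behavior listed above; consult the installed `/etc/nginx/sites-available/osprey` for the authoritative version:

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:8080;
    # WebSocket upgrade (SSH terminal, live updates)
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # Long-lived sessions: 86400s read timeout
    proxy_read_timeout 86400s;
}
```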
14. Troubleshooting
Cannot Log In
- Default credentials: `admin` / `admin`.
- Account locked: If login protection is enabled (Max Login Attempts > 0 in System Settings), wait for the lockout duration (default 15 minutes) or ask another admin to unlock the account. With the default settings, lockout triggers after 5 consecutive failures.
- Browser cookies: Ensure cookies are enabled. Osprey uses HTTP-only secure cookies for JWT auth. Third-party cookie blocking or privacy extensions can interfere.
- HTTPS certificate: Accept the self-signed certificate in your browser. Some browsers (especially Safari) block cookies on untrusted HTTPS origins.
- Mixed content: If you access Osprey over HTTP instead of HTTPS and `secure_cookies: true` is set in the config, the browser will reject the cookies. Either use HTTPS or set `secure_cookies: false` (not recommended for production).
- Clock skew: JWT tokens have a 15-minute lifetime by default. If the server clock is significantly ahead of the client, tokens may appear expired immediately. Ensure NTP is running on the server.
No Topology Data
- Check collectors: Verify at least one collector is running (green status in sidebar). Open the sidebar and look for the collector status badges.
- GRE tunnels: Ensure the remote router has a matching GRE tunnel configured and the IGP is enabled on the tunnel interface. For OSPF, check `show ip ospf neighbor`. For IS-IS, check `show clns neighbor` or `show isis adjacency`. Verify IP connectivity between the Osprey server and the router's GRE endpoint (ping).
- SNMP discovery: Verify SNMP is reachable from the Osprey server: `snmpwalk -v2c -c community target-ip 1.3.6.1.2.1.1.1`. Check firewall rules for UDP 161. For SNMPv3, verify that engine ID, username, auth, and privacy settings match exactly.
- Collector manager logs: `sudo journalctl -u osprey-collector-manager -f` -- look for "starting collector" or error messages.
- Engine logs: `sudo journalctl -u osprey-engine -f` -- look for snapshot processing messages ("processing snapshot for area...").
- NATS connectivity: Verify NATS is running: `sudo systemctl status nats-server`. Check that both the engine and collector-manager can connect (look for "connected to NATS" in their logs).
- Hierarchy mismatch: If you deleted and re-created hierarchy entities (networks, areas), old collectors may be orphaned. Check Admin > Monitoring for stale collector configs.
No Traffic Data
- SNMP targets: Check Admin > Monitoring > SNMP Targets for target status. Active targets show a green status.
- Consecutive failures: Targets auto-disable after 10 consecutive failures (configurable in System Settings). Re-enable them manually by clicking the enable toggle.
- Credentials: Verify SNMP credentials are correct. For v3, auth protocol, auth password, privacy protocol, and privacy password must all match the device configuration exactly.
- Firewall: Ensure UDP 161 is open from the Osprey server to the managed devices. Also verify that SNMP ACLs on the device permit the Osprey server IP.
- Poller logs: `sudo journalctl -u osprey-snmp-poller -f` -- look for poll success/failure messages and error details.
- Utilization not showing on canvas: Verify that View > Color > By Utilization is selected. Utilization data takes one poll interval (default 5 minutes) to appear after targets are added.
No BGP Data
- BMP target status: Check the sidebar under the BGP protocol instance -- each target shows a status indicator (green = connected, grey = pending, red = error). Alternatively, check via `GET /api/v1/bgp/targets`. If `pending`, the router hasn't connected yet.
- Router BMP config: Verify the router is configured to send BMP to the correct IP and port (default TCP 11019). Check `show bmp server` or equivalent on the router.
- Firewall: Ensure TCP 11019 is open inbound to the Osprey server from the router's management IP.
- Connection filtering: The BMP server only accepts connections from IPs matching registered BMP targets (`bmp_target.router_ip`). The optional `bmp.allowed_cidrs` in `osprey.yaml` adds an additional CIDR allowlist filter on top of this. If `bmp.allow_nat_fallback` is enabled, connections from unknown IPs are accepted and correlated by BMP sysName instead.
- RIB mode: If the BMP target's RIB mode is set to `none`, peers are tracked but no routes are processed. Change to `loc_rib` or `adj_rib_in_post` to see route data.
- BMP server logs: `sudo journalctl -u osprey-bmp-server -f` -- look for "BMP Peer Up", "End-of-RIB", or error messages.
- Engine logs: `sudo journalctl -u osprey-engine -f` -- look for "BGP End-of-RIB sync complete" or "persisted BGP best-path deltas".
- Peers show but no routes: The initial RIB dump can take 30-60 seconds for a full internet table. Wait for the "End-of-RIB" log message. If the BMP target uses `adj_rib_in_post`, routes only appear after the router sends Route Monitoring messages.
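When the target status stays grey, it helps to confirm the listener is actually up before investigating the router side. A quick local check, assuming the default port 11019 from this guide:

```shell
# Is anything listening on the BMP port on this host?
port=11019
if ss -tln | grep -q ":${port} "; then
  echo "osprey-bmp-server is listening on ${port}"
else
  echo "nothing listening on ${port} -- check: sudo systemctl status osprey-bmp-server"
fi
```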
Stale Devices Won't Disappear
Unreachable devices remain visible for the configured retention period (default 7 days). To change this:
- Admin > System Settings > Topology > Stale Device Retention: Reduce the hours (minimum 1 hour). Note that setting this below 24 hours risks removing devices during brief maintenance windows.
- Manual deletion: Right-click a stale or down device on the canvas and select Delete device (admin only). This permanently removes the device from the database.
Stale Devices Reappearing After Deletion
If you delete a device or hierarchy entity but it reappears, a running collector is likely re-discovering and re-creating it. Stop or delete the associated collector first, then delete the entity. When deleting a hierarchy entity (network, domain, protocol instance, area) via the API or sidebar, Osprey automatically disables associated collectors to prevent this.
Engine / Collector Manager / SNMP Poller Shows "Down"
The System Health popover shows heartbeat-based liveness for backend services. If a service shows "Down" or "Not responding":
- Verify the service is running: `sudo systemctl status osprey-engine` (or `osprey-collector-manager`, `osprey-snmp-poller`).
- Check the service logs for errors: `sudo journalctl -u osprey-engine -n 50`.
- Verify NATS is running -- heartbeats are published via NATS, so a NATS outage will cause all three services to appear down.
- After restarting a stopped service, its status recovers to "Healthy" within ~15 seconds (the heartbeat interval).
- The engine is treated as critical -- if it is unhealthy, the overall system status degrades to "Degraded" in the status bar.
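The checks above can be rolled into a single pass over every service this guide mentions. A small sketch using `systemctl is-active` (service names as installed by the .deb package):

```shell
# Print one status line per service; "unknown" means systemctl gave no answer.
for svc in osprey-engine osprey-api osprey-collector-manager \
           osprey-snmp-poller osprey-bmp-server nats-server postgresql; do
  status=$(systemctl is-active "$svc" 2>/dev/null)
  printf '%-26s %s\n' "$svc" "${status:-unknown}"
done
```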
WebSocket Disconnections
The bottom-right status bar shows WebSocket connection state. If it shows a red indicator:
- Check that nginx is properly proxying WebSocket upgrades. The default config includes `proxy_set_header Upgrade $http_upgrade` and `proxy_set_header Connection "upgrade"` with an 86400s read timeout.
- Verify the API service is running: `sudo systemctl status osprey-api`.
- Check for firewalls or corporate proxies that may be terminating long-lived connections.
- The UI automatically reconnects when the WebSocket drops. If you see frequent reconnections, check network stability between the browser and server.
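To test the upgrade path without a browser, you can send a raw WebSocket handshake with curl and look for a `101 Switching Protocols` status. A sketch -- the `/api/v1/ws` path is an assumption here, so substitute the WebSocket URL your deployment actually uses:

```shell
# 101 means nginx forwarded the upgrade; 502 means the API service is down.
# -k accepts the self-signed certificate; the path below is hypothetical.
curl -ks -o /dev/null -w '%{http_code}\n' \
  -H 'Connection: Upgrade' -H 'Upgrade: websocket' \
  -H 'Sec-WebSocket-Version: 13' \
  -H 'Sec-WebSocket-Key: c3VwZXItc2VjcmV0LWtleQ==' \
  https://localhost/api/v1/ws
```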
UI Crash Recovery
If a rendering error occurs in the topology canvas, activity tray, or panel stack, Osprey isolates the failure to the affected zone. A fallback panel appears with the error message and a Retry button. The rest of the UI continues functioning normally. Clicking Retry re-renders the failed zone. Switching areas or navigating away also resets the error state automatically.
DNS Names Not Resolving
- PTR records: Osprey resolves reverse DNS (PTR) records for router IDs and interface IPs. Ensure PTR records exist in your DNS infrastructure for the relevant IP addresses.
- Trigger refresh: Use Tools > Refresh DNS to force re-resolution of all cached IPs. The engine clears its DNS cache and re-resolves asynchronously -- results appear within seconds.
- Display mode: Check Admin > System Settings > Display > Device Name Format -- if set to `router_id`, DNS names are not used for labels. Set it to `dns` or `hostname` instead.
- DNS server configuration: The Osprey engine uses the system resolver (`/etc/resolv.conf`). Verify the DNS servers configured there can resolve PTR records for your network IP ranges.
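To verify a PTR record outside of Osprey, query the same name the resolver would. The reversal logic is shown explicitly below; the example IP is a placeholder:

```shell
# Build the in-addr.arpa name for an IPv4 address, then look it up.
ip="192.0.2.1"   # substitute a real router ID or interface IP
ptr_name=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}')
echo "$ptr_name"
# dig +short -x "$ip"   # equivalent PTR lookup, if dig is installed
```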
Database Issues
# Check PostgreSQL is running
sudo systemctl status postgresql
# Check database exists
sudo -u postgres psql -l | grep osprey
# Check if the osprey user can connect
sudo -u postgres psql -U osprey -d osprey -c "SELECT 1;"
# Run migrations manually (use the password from /etc/osprey/osprey.env)
/usr/bin/osprey migrate --db-url "postgres://osprey:PASSWORD@localhost:5432/osprey?sslmode=disable"
# Check migration state
sudo -u postgres psql osprey -c "SELECT version, dirty FROM schema_migrations;"
Dirty migration state: If a migration failed partway through, schema_migrations will show dirty=true. To fix:
- Check which version is dirty: `SELECT version, dirty FROM schema_migrations;`
- Manually inspect and fix the database state for that migration version.
- Set dirty to false: `UPDATE schema_migrations SET dirty=false;`
- Re-run migrations: `/usr/bin/osprey migrate --db-url "..."`
Disk space: PostgreSQL requires free disk space for WAL (write-ahead log) and temporary files. If the disk is full, PostgreSQL may stop accepting writes. Free space and restart: sudo systemctl restart postgresql.
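A quick way to check whether disk pressure is the culprit, assuming the Debian default data directory under `/var/lib/postgresql` (falling back to `/var/lib` if that path is absent):

```shell
# Warn when the volume holding PostgreSQL data is nearly full.
pct=$(df --output=pcent /var/lib/postgresql 2>/dev/null | tail -n 1 | tr -dc '0-9')
[ -z "$pct" ] && pct=$(df --output=pcent /var/lib | tail -n 1 | tr -dc '0-9')
if [ "$pct" -ge 90 ]; then
  echo "WARNING: ${pct}% used -- free space before restarting PostgreSQL"
else
  echo "OK: ${pct}% used"
fi
```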
Service Won't Start
# Check service logs for the specific error
sudo journalctl -u osprey-engine --no-pager -n 50
sudo journalctl -u osprey-api --no-pager -n 50
sudo journalctl -u osprey-collector-manager --no-pager -n 50
sudo journalctl -u osprey-snmp-poller --no-pager -n 50
sudo journalctl -u osprey-bmp-server --no-pager -n 50
Common error messages and solutions:
| Error | Cause | Solution |
|---|---|---|
| `connection refused` (port 5432) | PostgreSQL not running | `sudo systemctl start postgresql` |
| `connection refused` (port 4222) | NATS not running | `sudo systemctl start nats-server` |
| `permission denied` | File permissions wrong | Check ownership: `ls -la /etc/osprey/`. The `.env` file should be root:osprey 0640. |
| `address already in use` | Another process on the port | Find it: `sudo ss -tlnp \| grep :8080` and stop the conflicting process. |
| `migration dirty` | A migration failed mid-way | See Database Issues above for the dirty migration fix. |
| YAML parse error | Syntax error in config | Validate: `python3 -c "import yaml; yaml.safe_load(open('/etc/osprey/osprey.yaml'))"` |
| `invalid JWT secret` | Secret too short | Generate a new one: `openssl rand -base64 32` and update `/etc/osprey/osprey.env`. |
| `encryption_key not set` warning | No encryption key configured | Generate: `openssl rand -base64 32` and set `OSPREY_ENCRYPTION_KEY` in `/etc/osprey/osprey.env`. Restart services -- existing plaintext credentials are encrypted automatically on startup. |
NATS Issues
# Check NATS status
sudo systemctl status nats-server
# View NATS logs
sudo journalctl -u nats-server -f
# Verify NATS is listening
ss -tlnp | grep 4222
# Test NATS connectivity (if nats CLI is installed)
nats server ping
If NATS fails to start, check that the configuration file exists at /etc/osprey/nats.conf and that the JetStream data directory /var/lib/nats/jetstream exists and is writable.
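The directory check can be scripted. A sketch -- it assumes NATS runs as a `nats` system user, which may differ on your install:

```shell
# Verify the JetStream data directory exists and the nats user can write to it.
dir=/var/lib/nats/jetstream
if [ -d "$dir" ] && sudo -u nats test -w "$dir" 2>/dev/null; then
  echo "OK: $dir is writable by nats"
else
  echo "PROBLEM: $dir is missing or not writable -- check ownership with: ls -ld $dir"
fi
```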
nginx Issues
# Test nginx configuration
sudo nginx -t
# Check nginx status
sudo systemctl status nginx
# View nginx error log
sudo tail -50 /var/log/nginx/error.log
# Verify the Osprey site is enabled
ls -la /etc/nginx/sites-enabled/osprey
Common issues:
- 502 Bad Gateway: The API service is not running or not listening on port 8080. Check: `sudo systemctl status osprey-api`.
- SSL certificate errors: Regenerate the self-signed certificate: `sudo openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/osprey/certs/osprey.key -out /etc/osprey/certs/osprey.crt -subj "/CN=osprey"` then `sudo systemctl reload nginx`.
- Port 80/443 conflict: Another web server (Apache, etc.) may be using the ports. Check: `sudo ss -tlnp | grep -E ':80|:443'`.
Performance
- Large topologies (1000+ devices): Use the `geometric` or `grid` layout instead of `fcose`. Force-directed layouts are CPU-intensive and can cause UI lag on large graphs. Disable View > Area Boundaries for better rendering performance.
- SNMP polling: Reduce poll frequency for non-critical devices in System Settings. The default 5-minute interval works well for most deployments. Consider increasing the PDU timeout for high-latency WAN devices.
- Event and snapshot retention: Reduce retention in System Settings if disk space is constrained. Events and snapshots both default to 90 days.
- Browser memory: Close or minimize the Link Detail Panel when not actively monitoring traffic (it triggers 5-second boosted polling). Hide unnecessary columns in report panels. Minimized panels use `display: none` and preserve state without active polling. For very large topologies, use filters (View > Filters) to reduce the number of rendered nodes.
- Database growth: The largest tables are typically `topology_event` and `topology_snapshot`. Monitor database size with: `sudo -u postgres psql osprey -c "SELECT pg_size_pretty(pg_database_size('osprey'));"`. The retention settings in System Settings control automatic purging.
- SNMP poller concurrency: The SNMP poller walks all targets sequentially within each poll interval. If you have many targets and polls are taking longer than the interval, increase the interval or split targets across multiple poller instances.
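To see which tables are actually consuming the space, a per-table size query helps. This uses standard PostgreSQL catalog functions rather than anything Osprey-specific:

```shell
# Ten largest tables in the osprey database, biggest first.
sudo -u postgres psql osprey -c "
  SELECT relname,
         pg_size_pretty(pg_total_relation_size(relid)) AS total_size
  FROM pg_catalog.pg_statio_user_tables
  ORDER BY pg_total_relation_size(relid) DESC
  LIMIT 10;"
```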
Upgrading
When upgrading Osprey (installing a newer .deb package):
- The postinst script automatically runs database migrations.
- All five services are restarted.
- Existing configuration in `/etc/osprey/osprey.yaml` and `/etc/osprey/osprey.env` is preserved (these are conffiles).
- If a migration fails, the error is printed but the package install continues. Run migrations manually after fixing the issue.
# Upgrade
sudo apt install ./osprey_<new-version>_amd64.deb
# Verify all services are running after upgrade
systemctl status osprey.target
# If migrations failed, run manually
source /etc/osprey/osprey.env
/usr/bin/osprey migrate --db-url "postgres://osprey:${OSPREY_DB_PASSWORD}@localhost:5432/osprey?sslmode=disable"
Osprey is proprietary software. All rights reserved.