Overview

AI Agent Observability provides full visibility into how the Tasks Management AI Agent processes requests. Using LangSmith integration, operations teams can monitor performance, trace decision-making, and quickly identify issues.
Key Benefit: Understand exactly what the AI “thinks” when processing task requests, enabling faster troubleshooting and continuous improvement.
This feature adds monitoring capabilities only. The AI agent continues to work exactly as before with no changes to existing functionality.

What You Can See

Request Processing

Full visibility into how the AI agent processes each task request from start to finish

Decision Transparency

See exactly what the AI “thinks” when filling out task forms, including category and service selection

Performance Metrics

Response times, success rates, and throughput metrics for monitoring agent health

Error Tracking

Quickly identify and resolve issues with detailed error logs and stack traces

Benefits

Faster Troubleshooting

When issues occur with AI-assisted task creation, observability data helps identify:
  • What input the AI received
  • How the AI interpreted the request
  • Where in the process an error occurred
  • What caused unexpected behavior

Better Understanding of AI Decision-Making

See the reasoning behind AI choices:
  • Why a specific category was selected
  • How priority levels are determined
  • What factors influenced service selection
  • How schedule suggestions are generated

Performance Monitoring

Track key metrics to ensure optimal performance:
  • Average response times
  • Success/failure rates
  • Request volume trends
  • Resource utilization
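The metrics above can be computed from trace records. The sketch below is a minimal, stdlib-only illustration; the field names (`duration_ms`, `ok`) are assumptions for the example, not the actual LangSmith trace schema.

```python
from statistics import mean

def summarize(traces):
    """Aggregate basic health metrics from a list of trace records.

    Each record is assumed to be a dict with 'duration_ms' (float)
    and 'ok' (bool) keys -- hypothetical field names for illustration.
    """
    total = len(traces)
    successes = sum(1 for t in traces if t["ok"])
    return {
        "requests": total,
        "success_rate": successes / total if total else 0.0,
        "avg_response_ms": mean(t["duration_ms"] for t in traces) if traces else 0.0,
    }

traces = [
    {"duration_ms": 120.0, "ok": True},
    {"duration_ms": 340.0, "ok": True},
    {"duration_ms": 95.0, "ok": False},
]
print(summarize(traces))  # success_rate = 2/3, avg_response_ms = 185.0
```

In practice these aggregates are computed by the LangSmith dashboard itself; the sketch only shows what the numbers mean.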

Environments

The AI agent is monitored in both development and production environments:
| Environment | Dashboard Name | Purpose |
| --- | --- | --- |
| Development | Tasks Management Agent - Dev | Testing and debugging new features |
| Production | Tasks Management Agent - Prod | Monitoring live user interactions |
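One way to keep traces routed to the right dashboard is to resolve the project name from the deployment environment. The snippet below is a hypothetical convention (the `APP_ENV` variable and `dashboard_for` helper are assumptions for illustration); only the two dashboard names come from the table above.

```python
import os

# Mapping from deployment environment to LangSmith dashboard name,
# mirroring the Environments table.
DASHBOARDS = {
    "development": "Tasks Management Agent - Dev",
    "production": "Tasks Management Agent - Prod",
}

def dashboard_for(env=None):
    """Resolve the dashboard name for the current environment.

    Falls back to the APP_ENV variable (an assumed convention),
    then to development.
    """
    env = env or os.environ.get("APP_ENV", "development")
    return DASHBOARDS[env]

print(dashboard_for("production"))  # Tasks Management Agent - Prod
```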

LangSmith Dashboard

Access the LangSmith observability dashboard

How It Works

Trace Flow

  1. User initiates request — User describes a task in the AI-assisted form
  2. Agent receives input — The request is logged and traced
  3. Processing begins — Each step of AI reasoning is recorded
  4. Decisions are made — Category, service, priority selections are logged
  5. Response returned — Final output and timing are captured
  6. Metrics updated — Performance data is aggregated
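The trace flow above can be sketched as a decorator that records each step's name, duration, and outcome. This is a stdlib-only illustration of the concept, not the LangSmith SDK (which provides its own tracing decorator); the step names and in-memory `TRACE` list are assumptions for the example.

```python
import time
from functools import wraps

TRACE = []  # in-memory trace log; a real setup sends spans to LangSmith

def traced(step_name):
    """Record the name, duration, and outcome of one pipeline step."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                TRACE.append({"step": step_name, "ok": True,
                              "ms": (time.perf_counter() - start) * 1000})
                return result
            except Exception as exc:
                TRACE.append({"step": step_name, "ok": False, "error": str(exc),
                              "ms": (time.perf_counter() - start) * 1000})
                raise
        return wrapper
    return decorator

@traced("parse_input")
def parse_input(text):
    return text.lower().split()

@traced("select_category")
def select_category(tokens):
    # Toy decision logic, standing in for the agent's real reasoning.
    return "maintenance" if "repair" in tokens else "general"

tokens = parse_input("Repair the HVAC unit")
category = select_category(tokens)
print([t["step"] for t in TRACE])  # ['parse_input', 'select_category']
```

Because every step appends a record whether it succeeds or raises, a failed request still leaves a trace showing exactly which step broke.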

What Gets Traced

Input
  • Raw user input text
  • Parsed entities and keywords
  • Context information (facility, user role)
  • Request timestamp

Decision Process
  • Category matching logic
  • Service selection criteria
  • Priority determination factors
  • Schedule suggestion reasoning

Output
  • Selected category and service
  • Assigned priority level
  • Suggested schedule
  • Confidence scores (when available)

Performance
  • Total request duration
  • Individual step timings
  • Token usage (for LLM calls)
  • Memory utilization

Errors
  • Error messages and stack traces
  • Failed step identification
  • Retry attempts
  • Fallback activations
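A trace record bundling these fields might look like the dataclass below. This is a sketch only: the field names are assumptions chosen to mirror the list above, not the actual LangSmith run schema.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class TraceRecord:
    """One traced request; hypothetical fields grouped to mirror the
    list above: input, decisions, output, performance, errors."""
    raw_input: str
    facility: str
    user_role: str
    timestamp: str
    category: str
    service: str
    priority: str
    confidence: Optional[float] = None
    duration_ms: float = 0.0
    token_usage: int = 0
    error: Optional[str] = None
    retries: int = 0

rec = TraceRecord(
    raw_input="Fix the leaking faucet in room 204",
    facility="North Campus", user_role="nurse",
    timestamp="2024-05-01T09:30:00Z",
    category="Plumbing", service="Faucet repair",
    priority="High", confidence=0.92,
    duration_ms=850.0, token_usage=412,
)
print(asdict(rec)["category"])  # Plumbing
```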

Use Cases

Investigating User-Reported Issues

When a user reports that the AI made an incorrect suggestion:
  1. Find the specific request in LangSmith
  2. Review the input the AI received
  3. Trace the decision-making steps
  4. Identify where the logic diverged
  5. Determine whether it’s a training issue or an edge case
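Step 1 amounts to looking up one request's trace by its identifier; in the LangSmith UI this is a search, but the idea can be sketched over plain trace dicts (the `id`, `input`, and `category` field names are illustrative assumptions).

```python
def find_request(traces, request_id):
    """Locate one request's trace (step 1) so its input and
    decision steps can be reviewed (steps 2-4)."""
    return next((t for t in traces if t["id"] == request_id), None)

traces = [
    {"id": "req-101", "input": "clean spill in lobby",
     "steps": ["parse", "categorize", "respond"], "category": "Janitorial"},
    {"id": "req-102", "input": "replace hallway bulb",
     "steps": ["parse", "categorize", "respond"], "category": "Plumbing"},
]

# The user reported req-102; its trace shows a lighting request was
# categorized as Plumbing, pointing at the categorize step.
trace = find_request(traces, "req-102")
print(trace["input"], "->", trace["category"])
```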

Monitoring System Health

Regular health checks using observability data:
  • Review daily success rates
  • Check average response times
  • Identify any error spikes
  • Monitor resource utilization trends
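Identifying an error spike can be as simple as flagging days whose error rate stands out against the period's average. The thresholds below (5% baseline, 2x the average) are illustrative assumptions, not documented alerting rules.

```python
def error_spike(daily_error_rates, baseline=0.05, factor=2.0):
    """Flag days whose error rate exceeds both the baseline and
    `factor` times the period average -- thresholds are illustrative."""
    avg = sum(daily_error_rates.values()) / len(daily_error_rates)
    return [day for day, rate in daily_error_rates.items()
            if rate > baseline and rate > factor * avg]

rates = {"Mon": 0.01, "Tue": 0.02, "Wed": 0.18, "Thu": 0.01, "Fri": 0.02}
print(error_spike(rates))  # ['Wed']
```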

Improving AI Performance

Use trace data to enhance the AI:
  • Identify common misclassifications
  • Find patterns in failed requests
  • Discover edge cases for training
  • Measure impact of model updates
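Finding common misclassifications can be done by counting (predicted, corrected) category pairs across traces where a user changed the AI's choice. The sketch below assumes hypothetical `predicted`/`corrected` fields; the real signal would come from whatever correction data the traces capture.

```python
from collections import Counter

def misclassification_patterns(traces):
    """Count (predicted, corrected) category pairs from traces where
    the user corrected the AI's choice -- field names are illustrative."""
    return Counter(
        (t["predicted"], t["corrected"])
        for t in traces
        if t.get("corrected") and t["corrected"] != t["predicted"]
    )

traces = [
    {"predicted": "Plumbing", "corrected": "HVAC"},
    {"predicted": "Plumbing", "corrected": "HVAC"},
    {"predicted": "Janitorial", "corrected": None},   # accepted as-is
    {"predicted": "Electrical", "corrected": "HVAC"},
]
print(misclassification_patterns(traces).most_common(1))
# [(('Plumbing', 'HVAC'), 2)]
```

The most frequent pairs are natural candidates for new training examples or prompt adjustments.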

Best Practices

Access Requirements

Access to the LangSmith dashboard requires appropriate permissions. Contact your system administrator if you need access to:
  • View traces and logs
  • Access performance dashboards
  • Configure alerts and notifications

FAQ

Does observability change how the AI agent behaves?
No. Observability is purely monitoring—it records what the AI does without changing any functionality. The AI agent continues to work exactly as before.

What data do traces contain?
Traces may include task request content for debugging purposes. All data is handled in accordance with AllCare’s privacy and security policies.

Who can access the observability data?
Access is restricted to authorized AllCare operations and engineering staff. Contact your administrator for access requests.

How long are traces retained?
Trace retention follows LangSmith’s default policies. Contact the engineering team for specific retention periods.