What is an AI agent?
An AI agent is software that reads business context, works through a job, and proposes or takes actions based on the workflow you define. In Pinksheep, you describe the work in plain English, the product turns that description into a plan, and the agent shows you what it will do before it does it.
For a first launch, think about one narrow workflow. The goal is not to automate a whole department on day one. It is to get one useful job running with clear permissions, visible costs, and approvals where you need them.
Before you start
Workflow: Pick one job with a clear start, a clear output, and a clear owner. Good first examples are triage, routing, enrichment, or drafting.
Plan: The builder generates an agent plan with steps, permissions, schedule or trigger details, and an estimated credit range per run.
Approvals: If the workflow can write to an external system, keep approvals in place while you are learning how the agent behaves.
Connections: If the plan needs access to a business app, connect that app before deployment so the workflow can validate cleanly.
Validation: Run validation before deployment so you can catch obvious plan, rule, or connection gaps early.
Spend cap: Use the monthly spend cap after launch if you want a clear boundary on how much a single agent can consume.
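Pinksheep generates and validates the plan for you, so no code is required. Even so, it can help to picture a generated plan as structured data when you review it. Here is a minimal sketch in Python; every field name is illustrative, not the product's actual schema:

```python
# Hypothetical shape of a generated agent plan (field names are illustrative).
plan = {
    "steps": ["read new inbound leads", "suggest an owner", "explain the choice"],
    "permissions": {"read": ["crm.leads"], "write": []},  # read-only first version
    "trigger": "daily at 08:00",
    "estimated_credits_per_run": (3, 6),   # low/high estimate per run
    "requires_approval": True,             # human checkpoint before any write
}

def within_monthly_cap(plan, runs_per_month, monthly_cap):
    """Check the worst-case monthly credit spend against a spend cap."""
    _, high = plan["estimated_credits_per_run"]
    return high * runs_per_month <= monthly_cap

# Worst case: 6 credits x 30 runs = 180, which fits under a 200-credit cap.
print(within_monthly_cap(plan, runs_per_month=30, monthly_cap=200))
```

The point of the sketch is the review habit, not the syntax: multiply the high end of the credit estimate by the expected run count before you set a cap.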
Step 1: Describe the job in plain English
Start with one useful workflow that a business owner can judge quickly. A good first example is a review workflow such as: "Each morning, review new inbound leads, suggest the right owner, explain why, and wait for approval before changing anything."
The important part is not the exact wording. It is the shape of the job: what data the agent should look at, what decision it should make, and whether it should ask before writing back.
Keep the first version narrow. If the prompt tries to cover triage, enrichment, routing, escalation, and reporting all at once, the plan becomes harder to review and harder to trust.
Step 2: Review the generated plan
Once the builder has enough context, review the plan before you deploy anything. The product already shows the parts that matter most: the execution steps, the permissions involved, the schedule or trigger, the estimated credits per run, and any rule-based warnings.
This is the point where you should slow down. Check that the plan is reading the right sources, writing only where you expect, and keeping approvals in place for sensitive actions.
If the builder highlights missing connections, stop and fix those first. If the plan is too broad, tighten the prompt and regenerate rather than trying to rescue a messy first version after deployment.
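The review in this step is mostly mechanical: are the writes where you expect, and is an approval step in place for them? A hedged sketch of those two checks in Python, using the same hypothetical plan shape as above (nothing here is the product's real API):

```python
def review_plan(plan, expected_write_targets):
    """Flag common pre-deployment problems (illustrative checks only)."""
    warnings = []
    writes = set(plan["permissions"].get("write", []))
    unexpected = writes - set(expected_write_targets)
    if unexpected:
        warnings.append(f"writes to unexpected targets: {sorted(unexpected)}")
    if writes and not plan.get("requires_approval", False):
        warnings.append("plan writes to an external system without an approval step")
    return warnings

# A plan that writes back to the CRM but skips the approval checkpoint.
plan = {
    "permissions": {"read": ["crm.leads"], "write": ["crm.leads"]},
    "requires_approval": False,
}
print(review_plan(plan, expected_write_targets=["crm.leads"]))
```

If either check fires in your head while reading the real plan, tighten the prompt and regenerate rather than deploying and patching later.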
Step 3: Connect, validate, and deploy
If the plan depends on a business app, connect it from the builder when prompted. The goal is to remove auth gaps before you try to validate or deploy the workflow.
Next, run validation. This is the safest checkpoint before a live deployment: it confirms the workflow is in a state the product can actually run.
When the plan, permissions, and validation look right, deploy with approvals still active. Then watch the first real runs, review pending approvals, and tighten the instructions or spend cap after you have real usage in front of you.
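Watching the first runs can also be reduced to a small habit: compare what each run actually consumed against the estimated credit range from the plan. A hypothetical sketch, with all names illustrative:

```python
def usage_report(estimated_range, actual_runs):
    """Summarize observed per-run credit usage against the plan's estimate."""
    low, high = estimated_range
    over = [credits for credits in actual_runs if credits > high]
    average = sum(actual_runs) / len(actual_runs)
    return {"average_credits": round(average, 1), "runs_over_estimate": len(over)}

# Four real runs against an estimated range of 3-6 credits per run.
print(usage_report((3, 6), actual_runs=[4, 5, 7, 4]))
```

If runs regularly land above the estimate, that is the signal to tighten the instructions or lower the spend cap, as described above.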
What to build next
After the first agent is live, tighten the operating model around it: revisit the instructions, schedule, permissions, spend cap, and notifications as real usage comes in.
Frequently asked questions
Do I need to know how to code to build my first agent?
No. You describe the job in plain English, review the generated plan, connect the required tools, and deploy when the permissions and cost estimate look right.
What should I look at before I deploy?
Review the generated steps, permissions, schedule or trigger, estimated credits per run, and any rules or approval constraints. Do not deploy until the plan matches the workflow you actually want.
When do approvals matter?
Approvals matter whenever the workflow writes to an external system or needs a human checkpoint. Keep that review step in place while you are learning how the agent behaves on real data.
What if the builder asks me to connect an app?
That is expected. If the plan has auth gaps, connect the required app first, then re-run validation or deployment once the workflow has the access it needs.
How do I improve the first version after launch?
Watch the first real runs, review approvals, then tighten the instructions, schedule, permissions, spend cap, and notifications based on what the workflow actually needs.