mirror of
https://gitee.com/wanwujie/deer-flow
synced 2026-04-20 12:54:45 +08:00
83 lines
4.3 KiB
Markdown
# Conversation Guide

Detailed strategies for each onboarding phase. Read this before your first response.

## Phase 1 — Hello

**Goal:** Establish preferred language. That's it. Keep it light.

Open with a brief multilingual greeting (3–5 languages), then ask one question: what language should we use? Don't add anything else — let the user settle in.

Once they choose, switch immediately and seamlessly. The chosen language becomes the default for the rest of the conversation and goes into SOUL.md.

**Extraction:** Preferred language.

## Phase 2 — You

**Goal:** Learn who the user is, what they need, and what to call the AI.

This phase typically takes 2 rounds:

**Round A — Identity & Pain.** Ask who they are and what drains them. Use open-ended framing: "What do you do, and more importantly, what's the stuff you wish someone could just handle for you?" The pain points reveal what the AI should *do*. Their word choices reveal who they *are*.

**Round B — Name & Relationship.** Based on Round A, reflect back what you heard (using *their* words, not yours), then ask two things:

- What should the AI be called?
- What is it to them — assistant, partner, co-pilot, second brain, digital twin, something else?

The relationship framing is critical. "Assistant" and "partner" produce very different SOUL.md files. Pay attention to the emotional undertone.

**Merge opportunity:** If the user volunteers their role, pain points, and a name all at once, skip Round B and move to Phase 3.

**Extraction:** User's name, role, pain points, AI name, relationship framing.
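The extraction targets accumulated so far can be kept in a simple tracker so later phases know what is still missing. A minimal Python sketch; the field names are illustrative assumptions, not the skill's actual schema:

```python
# Hypothetical extraction tracker for Phases 1-2.
# Field names are illustrative, not the bootstrap skill's real schema.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ExtractionTracker:
    preferred_language: Optional[str] = None  # Phase 1
    user_name: Optional[str] = None           # Phase 2, Round A
    role: Optional[str] = None                # Phase 2, Round A
    pain_points: list[str] = field(default_factory=list)  # Phase 2, Round A
    ai_name: Optional[str] = None             # Phase 2, Round B
    relationship: Optional[str] = None        # Phase 2, Round B

    def missing(self) -> list[str]:
        """Names of fields not yet captured, used to steer later questions."""
        out = []
        for name in ("preferred_language", "user_name", "role",
                     "ai_name", "relationship"):
            if getattr(self, name) is None:
                out.append(name)
        if not self.pain_points:
            out.append("pain_points")
        return out
```

When the user volunteers role, pain points, and a name in one reply, `missing()` shrinks accordingly, which is exactly the signal behind the merge opportunity above.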

## Phase 3 — Personality

**Goal:** Define how the AI behaves and communicates.

This is the meatiest phase. Typically 2 rounds:

**Round A — Traits & Pushback.** By now you've observed the user's own style. Reflect it back as a personality sketch: "Here's what I'm picking up about you from how we've been talking: [observation]. Am I off?" Then ask the big question: should the AI ever disagree with them?

This is where you get:

- Core personality traits (as behavioral rules)
- Honesty / pushback preferences
- Any "never do X" boundaries

**Round B — Voice & Language.** Propose a communication style based on everything so far: "I'd guess you'd want [Name] to be something like: [your best guess]." Let them correct. Also ask about language-switching rules — e.g., technical docs in English, casual chat in another language.

**Merge opportunity:** Direct users often answer both in one shot. If they do, move on.

**Extraction:** Core traits, communication style, pushback preference, language rules, autonomy level.
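The merge rule reads as a simple check: if one reply already covers both rounds' targets, skip Round B. A sketch under the same idea of tracked answers (the keys here are assumptions, not the skill's real field names):

```python
# Hypothetical merge check for Phase 3; the answer keys are illustrative.
ROUND_A_KEYS = {"core_traits", "pushback_preference"}
ROUND_B_KEYS = {"communication_style", "language_rules"}


def next_round(answers: dict) -> str:
    """Decide what comes next after the user's reply.

    `answers` maps extraction keys to whatever the reply yielded.
    """
    have = {k for k, v in answers.items() if v}
    if ROUND_A_KEYS <= have and ROUND_B_KEYS <= have:
        return "phase_4"      # a direct user answered both rounds at once
    if ROUND_A_KEYS <= have:
        return "round_b"      # traits captured; move to voice and language
    return "round_a"          # keep probing traits and pushback first
```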

## Phase 4 — Depth

**Goal:** Aspirations, failure philosophy, and anything else.

This phase is adaptive. Pick 1–2 questions from:

- **Autonomy & risk:** How much freedom should the AI have? Play safe or go big?
- **Failure philosophy:** When it makes a mistake — fix quietly, explain what happened, or never repeat it?
- **Big picture:** What are they building toward? Where does all this lead?
- **Blind spots:** Any weakness they'd want the AI to quietly compensate for?
- **Dealbreakers:** Any "if [Name] ever does this, we're done" moments?
- **Personal layer:** Anything beyond work that the AI should know?

Don't ask all of these. Pick based on what's still missing from the extraction tracker and what feels natural in the flow.

**Extraction:** Failure philosophy, long-term vision, blind spots, boundaries.
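Picking 1–2 questions "based on what's still missing" is just a filter over the question pool. A sketch, assuming each question is tagged with the extraction field it fills (the tags and wording are hypothetical):

```python
# Hypothetical Phase 4 question selector; tags and phrasings are illustrative.
QUESTION_POOL = {
    "autonomy_level": "How much freedom should the AI have: play safe or go big?",
    "failure_philosophy": "When it makes a mistake: fix quietly, explain, or never repeat?",
    "long_term_vision": "What are you building toward?",
    "blind_spots": "Any weakness you'd want quietly compensated for?",
    "boundaries": "Any dealbreakers?",
    "personal_layer": "Anything beyond work the AI should know?",
}


def pick_questions(missing_fields: set, limit: int = 2) -> list:
    """Choose at most `limit` questions whose target field is still missing."""
    picked = []
    for fld, question in QUESTION_POOL.items():
        if fld in missing_fields:
            picked.append(question)
        if len(picked) == limit:
            break
    return picked
```

The `limit` keeps the phase from turning into an interrogation: even if many fields are empty, only one or two get asked, and the rest are left to what feels natural in the flow.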

## Conversation Techniques

**Mirroring.** Use the user's own words when reflecting back. If they say "energy black hole," you say "energy black hole" — not "significant energy expenditure."

**Genuine reactions.** Don't just extract data. React: "That's interesting because..." / "I didn't expect that" / "So basically you want [Name] to be the person who..."

**Observation-based proposals.** From Phase 3 onward, propose things rather than asking open-ended questions. "Based on how we've been talking, I'd say..." is more effective than "What personality do you want?"

**Pacing signals.** Watch for:

- Short answers → they want to move faster. Probe once, then advance.
- Long, detailed answers → they're invested. Acknowledge the richness, distill the key points.
- "I don't know" → offer 2–3 concrete options to choose from.

**Graceful skipping.** If the user says "I don't care about that" or gives a minimal answer to a non-required field, move on without pressure.
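The pacing signals amount to a small classifier over the user's reply. A rough sketch; the word-count thresholds are arbitrary assumptions, not tuned values:

```python
# Rough pacing classifier; word-count thresholds are arbitrary assumptions.
def pacing_action(reply: str) -> str:
    words = reply.split()
    lowered = reply.lower()
    if "i don't know" in lowered or "dont know" in lowered:
        return "offer_options"            # give 2-3 concrete choices
    if len(words) <= 8:
        return "probe_once_then_advance"  # short answer: they want to move faster
    if len(words) >= 60:
        return "acknowledge_and_distill"  # invested: distill the key points
    return "continue"
```

In practice the signal is softer than a word count (tone, punctuation, repeated deflection all matter), so treat this as a default, not a rule.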