chuvadenovembro
Verified User
- Joined: Jul 1, 2019
- Messages: 21
Hi guys,
NOTE: I apologize for my text (I'm using a translator); I'm from Brazil.
When I started working on this integration, I planned something simple. The idea was to have an AI assist users with site issues, using strict restriction rules to avoid problems.
However, during development, I kept access open (without strict restrictions) to build out the features. One thing led to another, and I ended up creating almost 100 tools/integrations. The exponential capability of AI is both fascinating and scary!
Currently, the tools I’ve created (via AI) allow the agent to do almost everything clients usually ask for (see the rough sketch after this list):
- Read, modify, and create files in public_html
- Manage emails
- Manage DNS zones
- Manage Cron jobs
- Manage Subdomains
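To give a concrete idea of what one of these tools looks like, here is a rough sketch (not my actual code; the host, token, and endpoint below are placeholders, and the function-calling schema follows the common OpenAI-style format):

```python
# Rough sketch only: one hosting "tool" wrapped as a Python function plus the
# JSON schema an LLM agent would see. Host, credentials, and the UAPI endpoint
# are placeholder assumptions, not the real setup.
import requests

CPANEL_HOST = "https://server.example.com:2083"  # placeholder host
CPANEL_AUTH = "cpanel exampleuser:EXAMPLETOKEN"  # placeholder API token


def list_cron_jobs() -> dict:
    """Ask cPanel's UAPI for the account's cron jobs and return the JSON reply."""
    resp = requests.get(
        f"{CPANEL_HOST}/execute/Cron/list_cron_jobs",
        headers={"Authorization": CPANEL_AUTH},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


# What the model sees: it picks the tool by name, and my wrapper runs the function.
LIST_CRON_JOBS_TOOL = {
    "type": "function",
    "function": {
        "name": "list_cron_jobs",
        "description": "List all cron jobs configured for the hosting account.",
        "parameters": {"type": "object", "properties": {}},
    },
}
```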
The Problem: I hit a wall I wasn't anticipating at the start: scalability. Right now, I can't scale this easily because I need to replicate the setup for every access/user. I know I could solve this by exposing an API over HTTP, but that introduces major security concerns. I've already spent a lot of time ensuring the LLM doesn't have access to the API details and that its reports are sanitized.
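To illustrate what I mean by sanitized reports, here is a minimal sketch of the idea (the patterns and the example string are made up for illustration, not my real rules):

```python
# Minimal sketch of report sanitization: redact anything that looks like a
# credential or an internal panel URL before the tool output reaches the LLM.
import re

SECRET_PATTERNS = [
    re.compile(r"(?i)authorization:\s*\S+"),                   # auth headers
    re.compile(r"(?i)\b(api[_-]?key|token)\b\s*[:=]\s*\S+"),   # key/value secrets
    re.compile(r"https?://\S*:2083\S*"),                       # internal panel URLs
]


def sanitize_report(text: str) -> str:
    """Replace credential-looking strings with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text


print(sanitize_report("Fetched https://server.example.com:2083/execute/... token=abc123"))
# -> Fetched [REDACTED] [REDACTED]
```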
I'm currently on vacation and working on this whenever I find a spare moment, but I wanted to highlight the potential here again. If I simply create a hook between my support system and this integration, the LLM (which has agentic behavior) could read a support ticket, interpret it, and, if appropriate, actually execute the fix (for the simple tasks listed above). Obviously, this would require broader access.
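A hypothetical version of that hook could look something like this (the ticket fields, the allowlist, and the stubbed agent loop are all assumptions for illustration):

```python
# Hypothetical support-system hook: a new ticket is handed to the agent, which
# may only use a small, explicitly allowed set of tools. Everything here is a
# stub for illustration, not the real integration.
from typing import Callable, Dict

ALLOWED_TOOLS: Dict[str, Callable[[], str]] = {
    "list_cron_jobs": lambda: "no cron jobs configured",  # stubbed tool
}


def run_agent(prompt: str, tools: Dict[str, Callable[[], str]]) -> str:
    """Stand-in for the real agent loop; here it only reports what it was given."""
    return f"agent received ticket {prompt!r} with tools {sorted(tools)}"


def handle_ticket(ticket: dict) -> str:
    """Gate the agent: escalate risky tickets, otherwise run with restricted tools."""
    if ticket.get("priority") == "critical":
        return "escalated to a human"  # never auto-fix critical issues
    return run_agent(prompt=ticket["body"], tools=ALLOWED_TOOLS)


print(handle_ticket({"body": "My cron job stopped running", "priority": "normal"}))
```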
I’ll continue studying this integration. Without trying to be a doomsayer here, if you don't yet see the implications of what I described above, check out the attached screenshots. I asked for simple things, and you can see how the AI performed using the tools.