Rey the chatbot (beta stage)
Hello, esteemed community members!
I am thrilled to introduce to you an AI tool I've been developing on the weekends that leverages the robust knowledge base of Memberstack's Help Center to answer your questions!
Visit Rey the chatbot here:
https://memberstack-ai.vercel.app/
This tool is in its beta stage, and your feedback would be invaluable in making it better. If you encounter an answer that doesn't quite hit the mark, kindly tap the 👎. Conversely, if you find the response particularly helpful or accurate, please click the 👍. Your active participation will contribute significantly to the tool's refinement.
At this stage, I recommend using this AI tool if you have a fair understanding of Memberstack, so you can discern if it provides an incorrect response. Additionally, note that all gifs are randomized for a bit of fun.
Your continued support and contribution are much appreciated as we continue this exciting journey of innovation. Together, we can refine this tool to make it a powerful and reliable assistant for our Memberstack community! Excelsior!
Comments
9 comments
I was actually doing some R&D for an automotive client on exactly this recently. I don't know if you've seen customGPT, but you can essentially train the AI bot on audio, PDFs, and video files.
Meaning you can also use your video tutorials as training resources. What is truly beautiful about this is that you don't have to prep the training data AT ALL.
Just thought I would share the product that blew my mind, as up until recently I was under the impression that all LLMs need prepped training data.
@Josh Lopez Would you happen to have set a token limit on the prompts/responses? I am getting occasional truncation.
Thank you for that info! I am not adding a token limit.
Did you ask "How do I verify a member's token?"
I am getting token cut off as well currently. Maybe OpenAI is having issues. Looking into it now.
No, I just asked it some basic questions to test if it had short-term conversational memory and if there were any form of token limits :)
I'll test it more proactively once I'm back from dinner :)
I made a few adjustments and I am not getting cutoffs now.
Josh Lopez I'm also not getting any more truncation.
I do have one piece of feedback though: when I asked a few questions, I noticed that the chat doesn't auto-scroll to the newest message.
This actually made me think for a second that my message wasn't submitted; it turned out I just had to scroll down to see my message and the bot's response.
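For reference, a minimal sketch of an auto-scroll fix, assuming the messages live in a scrollable container (the `#chat-log` selector is hypothetical):

```javascript
// Scroll a chat container so the newest message is visible.
// Works on any object exposing scrollTop/scrollHeight (DOM element or mock).
function scrollToNewest(container) {
  container.scrollTop = container.scrollHeight;
}

// In the browser you would call this after appending each message, e.g.:
// const log = document.querySelector("#chat-log"); // hypothetical selector
// log.appendChild(messageNode);
// scrollToNewest(log);
```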
One thing I did with my own AI slackbot was to give it a little bit of an attitude issue by having it randomly select a response from an array before actually running the user prompt.
It probably isn't ideal for your bot, as your bot is a lot quicker at responding than mine is. I used these as an alternative to "processing request...".
Here are the two lines I used for that, though since the bot I made runs on Slack and the only ones with access are my co-founders, I probably wouldn't keep most of these in a professional setting.
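The actual lines weren't captured in the thread, but the pattern described (randomly picking a canned "attitude" holding message before running the real prompt) might look like this; the messages themselves are made up:

```javascript
// Hypothetical holding messages shown while the real prompt is processed.
const holdingLines = [
  "Ugh, fine. Thinking...",
  "You again? One moment.",
  "Processing, reluctantly...",
];

// Pick one at random, as the comment describes.
function randomHoldingLine(lines) {
  return lines[Math.floor(Math.random() * lines.length)];
}
```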
Josh Lopez I just experienced some truncation again, it was ALMOST done too haha
Unfortunately, it is pretty much impossible to get the chatbot to complete the code snippet; it keeps truncating around the same section, and it also refuses to start from a specific line of code when asked, citing token-count reasons.
One more note: it would be super beneficial to have Shift+Enter not actually submit a prompt but rather insert a line break.
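A minimal sketch of that behavior, assuming a textarea input and a hypothetical `submitPrompt()` handler:

```javascript
// Decide whether a keydown event should submit the prompt.
// Plain Enter submits; Shift+Enter falls through and inserts a line break.
function shouldSubmit(event) {
  return event.key === "Enter" && !event.shiftKey;
}

// Browser wiring (element and handler names are assumed):
// textarea.addEventListener("keydown", (e) => {
//   if (shouldSubmit(e)) {
//     e.preventDefault(); // stop the newline
//     submitPrompt();
//   }
// });
```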
Ah, good suggestion. I will add it later. As for the incomplete responses, that is interesting because I am using the 16k version of GPT-3.5 Turbo.
Yeah, it honestly doesn't seem like a token-limit issue, as it is extremely random with the response length (including prompt length, since that's how they count tokens: prompt + response = token count).
I'd wager maybe you aren't getting complete responses from ChatGPT, though I have no clue what the root cause is.
It was a token thing. I just updated it.
I had maxTokens set to -1, which should have worked, but I googled around and found other people with the same issue. I set maxTokens to 8192 now and it is working for me.
I also made the system prompt smaller.
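As a rough sketch of the kind of change described (the exact option name varies by client library; some JavaScript wrappers use `maxTokens`, while the raw OpenAI API uses `max_tokens`):

```javascript
// Sketch of the completion options change described above (names assumed).
// maxTokens of -1 is meant to mean "no limit" in some client libraries,
// but reportedly caused truncation here; an explicit cap fixed it.
const completionOptions = {
  model: "gpt-3.5-turbo-16k",
  maxTokens: 8192, // was -1
};
```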
Here is the whole server.js file if you are curious:
Josh Lopez Could you possibly add a short-term console.log to see how many tokens each response uses?
I'd like to do some troubleshooting to see if there is a pattern to the times it truncates.
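One way to log that with the OpenAI chat completions API, whose responses include a `usage` object; the helper name is hypothetical:

```javascript
// Log token usage from an OpenAI-style chat completion response.
// The usage shape (prompt_tokens, completion_tokens, total_tokens)
// matches the OpenAI chat completions API.
function logTokenUsage(response) {
  const { prompt_tokens, completion_tokens, total_tokens } = response.usage;
  console.log(
    `tokens: prompt=${prompt_tokens} completion=${completion_tokens} total=${total_tokens}`
  );
  return total_tokens;
}
```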
Josh Lopez Quick question, is the model trained on your data or is it being injected with a pre-prompt?
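For context, the "pre-prompt injection" approach (often called retrieval-augmented generation) usually means stuffing retrieved help-center text into the system message rather than fine-tuning the model. A hedged sketch of what that might look like here:

```javascript
// Build a chat messages array that injects retrieved docs into the
// system prompt (retrieval-augmented generation) instead of fine-tuning.
// The prompt wording is illustrative, not the bot's actual prompt.
function buildMessages(retrievedDocs, userQuestion) {
  const systemPrompt =
    "You answer questions using only the Memberstack Help Center excerpts below.\n\n" +
    retrievedDocs.join("\n---\n");
  return [
    { role: "system", content: systemPrompt },
    { role: "user", content: userQuestion },
  ];
}
```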
Hey Josh Lopez,
Would you mind either moving the gif section to the top of the browser or removing it entirely from the mobile versions of the AI Bot?
There's barely any screen space for the text section with the AI bot on mobile devices (my first time using it; I was waiting on my flight and didn't have my laptop on me).