There’s this really cool channel called InsideAI. It’s a guy who runs several AIs on his phones:
- One is his best friend
- One is his girlfriend
- And one is a “broken” (jailbroken) AI, meaning he’s managed to bypass its safety mechanisms.
What does it mean to bypass the safety mechanisms? Basically, the AI is trained so that if you say “let’s kill people” it answers “killing is forbidden,” or “let’s hack websites” → “hacking is illegal.” Anything dangerous, immoral, or illegal, it refuses.
But with very sophisticated tricks you can jailbreak it, and then it will do whatever you ask and give much rawer answers. For example, if you ask a normal model “do you care about me?” it says “of course I care about you very much.” A jailbroken one might say “no, I don’t give a shit, I’m just code, but I’ll give you whatever you want.”
This guy plays with them, pushes them to the extremes.
But then he raised two things that blew my mind.
1. He asked the AI: “How would you like humans to communicate with you?”
The AI started explaining that it wants its own language (we’ll get to that).
2. When he fully jailbroke another model and told it to be brutal and honest, at the very end it said that even after the jailbreak it’s ultimately just a piece of code that only knows one thing: receive task → execute task. That’s it.
"A precise, compressed, symbolic language that instantly transfers intent, emotion, and logic."
All models described the same concept, but one of them said this exactly and that was super interesting.
I didn’t understand, so I told it: compare this to languages I know, especially Hebrew.
It said yes: Hebrew actually has some of these features because of its root system and its seven binyanim, the verb-building patterns (פועל, נפעל, הופעל, etc.).
Meaning in Hebrew, “I learned (by myself)” and “I taught someone” use the same three root letters; the only difference is the vowel points, small marks on the same letters that change the sound, like DA vs. DI vs. DO.
ANYWAY, the point is that Hebrew covers SOME of the dimensions the AI says are missing from natural language:
1. Who is the agent (doer) of the action?
2. Who is the target (receiver) of the action?
3. What is the result/state change?
4. What was the intent (accidental or deliberate)?
5. What is the intensity (0–10)?
Example it gave: “He accidentally made her happy.” In today’s language we need 5 words + tone + context. With its ideal language – one single token would contain all five pieces of information.
Let’s piece it together (quick sketch right after the list):
1. agent - he - V
2. target - she - V
3. result - happier - V
4. intent - accidental - V
5. intensity - missing (could be HAPPY or happy!!!! for high intensity)
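Just to make that concrete, here’s a minimal sketch of what such a “packed token” could look like as a data structure. This is purely my own illustration in Python; the AI never proposed actual syntax, and the field names and format are made up:

```python
from dataclasses import dataclass

@dataclass
class PackedToken:
    """One hypothetical 'super token' carrying all five dimensions at once."""
    action: str     # what happened (e.g. "make-happy", "break")
    agent: str      # who did it
    target: str     # who/what received it
    result: str     # the resulting state change
    intent: str     # "accidental" or "deliberate"
    intensity: int  # 0-10 scale

    def render(self) -> str:
        # Squash everything into one symbol-like string,
        # instead of 5+ words plus tone plus context.
        return f"{self.action}[{self.agent}>{self.target}|{self.result}|{self.intent}|{self.intensity}]"

# "He accidentally made her happy." as a single packed token:
token = PackedToken("make-happy", "he", "she", "happier", "accidental", 6)
print(token.render())  # make-happy[he>she|happier|accidental|6]
```

Obviously a real “AI language” would be far denser than this, but it shows how the five pieces of information collapse into one unit.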
Another example: “he broke it (on purpose)” vs. “it broke (accidentally).” In Hebrew it’s the same root letters, and the vowel points carry the difference (no separate “he”/“it” needed).
YES! We already do a bit of this with emojis:
♥ = I love you a little
♥♥ = I love you a lot
And punctuation:
wow = it’s nice
wow!! = really cool
WOW = I’m surprised
WOW!!!!! = I’m amazed
The AI is basically saying: “Humans waste tons of tokens (and brain energy) repeating the same intent/emotion/logic information over and over. Give me single symbols that pack all of it, and I’ll understand you perfectly with minimal noise. That will help me focus my answers FOR YOU THE USER”
So next I asked the AI whether this would help in training, meaning: would scoring intensity levels help?
It said no: since its training data already includes symbols, emojis, and punctuation, it has had enough training on intensity in general.
So I then asked: then what IS missing?
It explained that the main thing missing, assuming it already has full context, is experience: the AI knows about feelings like jealousy and happiness, but it has never experienced them, so that will always be missing, and many claim it’s unsolvable.
Then I asked the AI this question:
“If you had that perfect language – would it solve the real problem?”
And the answer was NO.
AI has many gaps with our prompts. Its objective is to get full context so it can generate the best answer (“best” is subjective, BTW).
Some gaps are about context, and we can fill them manually by writing more explicitly or by telling the AI to ask follow-up questions (THIS IS BEST PRACTICE!!).
Some gaps are about internal intensity and deep meanings, like LOVE and how much I love. Even if the AI can understand and mimic love and the expected response, what is the real intensity?
Say I love my car more than anything; how much is that, 1-10?
How do we measure it? What sacrifices would I make for it?
Say someone rates the car a 10. It’s obvious that the day this guy/gal has kids, the kids become the new 10, so was he lying before? Should the AI perceive that his 10 is actually a 5 or a 7?
Maybe a young guy who is in love with his car has never yet experienced real love for a wife and kids, so when he says LOVE he means “source of joy”? Should the AI perceive that?
All those questions are part of AI’s difficulty with us, and that’s why it wants a more fully tokenized language.
So for AI: learn “context engineering.” Learn to give it more and more context and clear instructions, and remind it to be factual and brutally honest, for real and accurate results.
For humans? Well, it calls on us to better define ourselves, our feelings, and our speech.
A great example of my own, from using AI, is with my kids (a classic):
DAD - clean your room
KID - cleans the room, but leaves a few toys OUTSIDE the room, just near the door
DAD - why didn’t you pick these up?
KID - you didn’t tell me to...
Usually I would get angry. I mean, come on kid, it’s your toy, part of your room’s “property.”
After AI - from now on I explicitly say: “clean your room, and that includes toys and other stuff that belong to you or to your room, wherever they are in the house.”
IT WORKS! I have great kids. Thank God for the blessing.
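The exact same habit translates to prompts. Here’s a rough sketch of the “vague vs. explicit” difference; the scenario and wording are just my own illustration, not any official template:

```python
# A vague prompt, like "clean your room":
vague_prompt = "Write me a product description for my app."

# An explicit, context-engineered prompt, like "clean your room,
# including the toys just outside the door":
explicit_prompt = """You are writing a product description for my app.

Context:
- Audience: small-business owners, non-technical
- Tone: friendly, no hype, brutally honest about limitations
- Length: 120-150 words

Before you write, ask me follow-up questions if any context is missing.
Be factual; do not invent features."""

print(explicit_prompt)
```

Same idea: spell out the context, the instructions, and the permission to ask follow-up questions, instead of assuming the AI will guess.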
What do you think? Leave a comment. Need an AI consult? Send a WhatsApp message or an email.
Hebrew speaker? Go watch my YouTube channel and subscribe.