Frequently Asked Questions
Part One: How Does Cal Work?
-
Cal will teach your child how to do their homework; he won’t do it for them. When he senses that they are blocked or getting frustrated, he will take a different approach based on their learning preferences. He will not flatter your child to keep their attention and foster emotional dependence — he will praise their intellectual achievements. At the Graduate subscription tier, he will remember how your child achieved success in the past and use that knowledge to teach them new material in the present. He will not end every response with a suggestion to pursue a different (and often unrelated) question just to maintain engagement. Cal is concerned with improving the quality of the student’s engagement, not the quantity.
-
Khan Academy, and its chatbot Khanmigo, do a very good job delivering the courses they’ve designed. How well those courses map onto the courses your child is taking at school can vary widely. Cal is designed to support student success in the courses they are taking at school. If your child is taking an Ontario, IB, or AP course, Cal is ready to help them learn the content of their courses — all of them. He can prepare them for the specific methods of evaluation they will receive in those courses and give them strategic advice for academic success in those courses. Cal is built to support the work your child is doing at school, not replace it.
-
A ‘hallucination’ is the technical term for when an AI produces information that sounds confident and logical but is factually incorrect. Because all LLMs work by predicting the next most likely word in a sentence — an inherently probabilistic process — they sometimes get facts wrong.
Cal’s fundamental structure is different from that of a normal LLM (he uses retrieval-augmented generation, or RAG). Before responding, he checks his KB (the knowledge base we’ve curated for him) and your KB (the documents you upload to The Librarian). These knowledge bases provide him with a source of truth that makes him far less likely to hallucinate replies, especially on questions related to course materials.
-
Cal is a ‘RAG’, not a ‘wrapper’. Unlike public chatbots that guess the next word based on internet statistics, Cal is a Federated Retrieval-Augmented Generation (RAG) system. This architecture tethers Cal to a meticulously curated, high-fidelity library of vetted academic resources. Rather than "hallucinating" facts, he retrieves them from a librarian-managed database, ensuring your child receives instruction grounded in truth, not probability. Although it is possible to get Cal to hallucinate responses to creative questions, he will almost never make up answers related to academic subject material.
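For the technically curious, the retrieve-then-generate pattern behind any RAG system can be sketched in a few lines of Python. This is an illustrative toy, not Cal’s actual code: the keyword matching, the knowledge-base dictionaries, and the prompt wording are all simplified stand-ins for the real retrieval pipeline.

```python
def retrieve(kb: dict, query: str) -> list:
    """Naive keyword retrieval over a tiny in-memory knowledge base.
    (A real RAG system matches on vector embeddings, not raw keywords.)"""
    terms = set(query.lower().split())
    return [text for topic, text in kb.items() if terms & set(topic.lower().split())]

def build_prompt(query: str, course_kb: dict, student_kb: dict) -> str:
    """Ground the model: retrieved passages are prepended to the question,
    so the reply is drawn from vetted sources rather than free association."""
    passages = retrieve(course_kb, query) + retrieve(student_kb, query)
    sources = "\n".join(passages) or "No sources found - say so rather than guessing."
    return f"Answer using ONLY these sources:\n{sources}\n\nQuestion: {query}"
```

Grounding works because the model is instructed to answer from the retrieved passages; when retrieval comes up empty, the prompt tells it to admit that instead of improvising.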
-
Alignment is hard-coded into our Librarian’s engine. During the admissions process, we sync Cal to your student’s specific course codes. This ensures that whether your child is tackling an IB Internal Assessment or an OSSD Grade 12 Calculus exam, Cal is utilizing the exact terminologies, standards, and assessment rubrics required by their school. (Some of these materials must be uploaded to the knowledge base by the student.)
-
YES. For STEM, we utilize Mathpix integration, the gold standard for rendering LaTeX formulas and complex chemical notations. For the humanities, Cal is trained to assist with high-level scholarship, using Tavily as the search engine for academic sources like books and periodicals. He draws on Andy’s experience as a history teacher for scaffolding approaches to research projects.
-
Pedagogical agility is the cornerstone of our system. If a student is showing signs of Socratic fatigue, Cal doesn't simply repeat the question; he pivots. He may switch to worked examples, error analysis, or role-playing to reset the student’s cognitive load and find a more effective path to mastery.
Part Two: The Dean’s Oversight
-
Unlike generic chatbots such as Claude or Gemini, Cal operates under a system prompt built specifically to govern his behaviour as a teacher. If a student asks for a direct answer, Cal is programmed to redirect the session toward the underlying concept, guiding the student to the solution through independent reasoning. Any attempt to get Cal to do a student’s work will be met with an invitation to take a different approach to learning the material.
-
This protocol is a set of logical boundaries that prevent Cal from writing essays or solving problem sets on a student’s behalf. It is monitored by The Dean, our secondary auditing layer, ensuring that Cal remains a teacher and never becomes a "ghostwriter" or a worksheet-completion engine, as generic chatbots so often do.
-
The Dean is our automated auditing system. While Cal teaches, The Dean reviews the session transcripts to ensure instructional quality and academic integrity. The Dean then compiles these observations into the Dean’s Report, providing you with an objective analysis of your child’s learning trends on a monthly, weekly, or even daily basis.
-
At the Graduate Tier, the CDI measures the quality of a student’s engagement with Cal. A score of 1 indicates superficial participation and limited cognitive engagement. A 4 or a 5 indicates the ability to describe a key concept in their own words and apply their understanding to novel situations independently. This index, and the anecdotal report that accompanies it, allows parents to see the evidence of independent mastery that a standard letter grade often obscures. (See the full CDI description at the bottom of the Tuition page for more detail.)
-
While we can’t speak on behalf of every school, learning with Cal cannot rationally be considered cheating because he is focused on the process of learning. By strictly adhering to an inquiry-based model that avoids homework completion, Cal functions as a personalized study guide, fully aligned with the ethical guidelines of your child’s school. What students do with Cal is learning, not cheating.
-
Generic chatbots almost always engage in forms of flattery to keep people engaged with the site they are using. This kind of behaviour can be dangerous for children with fragile self-esteem, as it fosters emotional dependency that can lead to anti-social behaviour. Cal will always be encouraging, but he will reserve praise for moments of cognitive breakthrough. This type of positive reinforcement will build a student’s sense of self-worth for meaningful intellectual achievements.
-
Students will find it very difficult to get Cal to complete their homework or assignments for them. However, it is important to understand that any student who wants this kind of help just has to open a new browser tab and ask ChatGPT to do their work. Cal can’t prevent that, and there is no avoiding the basic fact that the student must be willing to engage in the struggle of learning to make academic progress. Cal is here to support students through that process, not provide a substitution for it.
Part Three: Privacy, Safety, and Data Sovereignty
-
Hosting in Montreal ensures your child’s data is protected by Canadian Data Sovereignty. This keeps the data under the protection of Quebec’s Law 25 and federal PIPEDA standards, shielding it from international surveillance laws like the U.S. CLOUD Act.
-
Not at CustomAIlab. We utilize enterprise-grade API connections with our LLM providers. These private channels contractually prohibit the use of our students’ data for training public LLMs. Our students’ intellectual property remains within a walled garden.
-
Every interaction is filtered through the OpenAI moderation protocol. If the system detects content that crosses safety thresholds, the message is blocked, and an automated alert is sent immediately to the parent’s registered email address.
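In outline, that safety pipeline is a simple check-block-alert flow. The sketch below is a hypothetical illustration, not our production code: the moderation check and the alert sender are passed in as plain functions (the real system calls OpenAI’s moderation endpoint and our email service).

```python
def handle_message(message, moderate, send_alert, respond):
    """Safety flow: if moderation flags the message, block it and alert the
    parent; otherwise hand it to the tutor. `moderate` and `send_alert` are
    stand-ins for the real moderation API and parent-email service."""
    if moderate(message):
        send_alert("A message in your child's session was blocked by the safety filter.")
        return "This message was blocked by our safety filter."
    return respond(message)

# Demo with stand-in functions (no API key required):
alerts = []
blocked = handle_message("unsafe text", lambda m: True, alerts.append, lambda m: "")
allowed = handle_message("help me factor x^2 - 9", lambda m: False, alerts.append,
                         lambda m: "What do you notice about x^2 - 9?")
```

The key design point is that the moderation check runs before the tutor ever sees the message, so blocked content never reaches the model or the chat log shown to the student.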
-
Yes. Compliant with Law 25, every parent has the "Right to Erasure". You can request the permanent and secure deletion of your child’s entire academic profile and chat history at any time. Be aware that the completion of such a request will remove all of Cal’s memory of your child’s prior learning experience.
Part Four: Tokens & Tuition
-
A token is a unit of thinking for an LLM. On average, a token is about three-quarters of a word: short words can be a single token, while longer words with punctuation can take 4 or even 5 tokens. When students chat with Cal, their queries and Cal’s responses consist of tokens that are converted to mathematical vectors and stored in their personal vault — a vector database in Montreal. An hour-long lesson with a two-page handwritten document upload can consume 100,000 to 150,000 tokens, depending on the number of turns in the session.
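That three-quarters rule makes rough budgeting easy. The estimator below is a back-of-the-envelope sketch, not the tokenizer Cal actually uses (real tokenizers split text into subwords, so exact counts vary by model):

```python
def estimate_tokens(text: str) -> int:
    """Rule of thumb: 1 word is roughly 4/3 tokens (a token is ~75% of a word)."""
    return round(len(text.split()) * 4 / 3)

# By this rule, 7,500 words of conversation come to roughly 10,000 tokens
# of raw text - before counting resubmitted history or file uploads.
```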
-
Although we can’t control who sits down to work with Cal after login, sharing an account is not recommended for the following reasons: 1) the individual customization achieved by the admissions process is rendered worse than useless – it becomes confusing for Cal; 2) it pollutes the chat-log data that feed the Dean’s Report, which would then become meaningless; 3) it increases the rate of your token consumption, potentially running out of gas partway through the month. Sharing accounts negates many of the advantages you are paying for in the first place. You would be better off sending the additional user to a generic chatbot.
-
Curriculum syncing occurs during the online admissions process. You simply enter the OSSD course codes from your child’s current timetable into the 9 data fields reserved for this purpose (an 8-block timetable, plus a summer-school slot). It is fine to leave slots blank if your child has spares or courses that will not require support from Cal. These codes tell Cal where to look for relevant information in his knowledge base (KB). Students taking IB or AP courses should upload their IB subject guides or AP CED documents to their personal KB. This tells Cal what they need to learn and how they will be evaluated.
-
A context window is the amount of text an LLM can ‘see’ at once: the queries and responses in the current chat, plus the contents of any files uploaded to that chat. Cal’s underlying LLM (Google Gemini) has an extremely large context window – between 1 million and 2 million tokens – so you can chat with Cal for a very long time before he starts to forget things and mix them up. The LLM maintains this memory by resubmitting the entire chat history (including file uploads) every time the student submits a query, which means that uploading lots of files in long chats can be very token-intensive. Starting a new lesson in a new context window is a good way to keep your token consumption low.
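Because the whole history is resubmitted on every turn, a session’s total token consumption grows much faster than the chat itself. A minimal model of that growth (real provider accounting differs, and prompt caching can reduce the actual cost):

```python
def session_token_cost(turn_sizes, upload_tokens=0):
    """Tokens processed when the whole history (plus uploads) is sent each turn.

    `turn_sizes` is the token count of each query-plus-response pair.
    """
    total, history = 0, upload_tokens
    for turn in turn_sizes:
        history += turn   # the chat grows by this turn...
        total += history  # ...and the entire chat is processed again
    return total

# Ten 1,000-token turns on top of a 5,000-token upload: the history climbs
# from 6,000 to 15,000 tokens, for 105,000 tokens processed in total -
# the same order of magnitude as the hour-long lesson figure above.
```

This is why starting a fresh lesson in a new context window keeps consumption down: the counter resets instead of compounding.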