Author: Admin


Nano Banana 3 Goes Unrestricted: Why Higgsfield's Surprise Release Matters for Therapists, Educators, and AI-Driven Practice

Every once in a while, the AI world drops a surprise that makes everyone sit up. This week, it came from Higgsfield, the company behind the Nano Banana video-generation models — known for producing some of the cleanest, most realistic AI videos on the market, unlocking capabilities that were previously behind expensive enterprise plans. For most people, this news is exciting. For therapists, educators, researchers, and content creators working in human development and rehabilitation, it’s transformative.

Nano Banana 3 and Nano Banana Pro are part of Higgsfield’s next-generation video models. They were originally designed for creators and studios, but the quality, speed, and realism they deliver have caught the attention of professionals across healthcare, education, and the wider neurodevelopmental field. These models aren’t basic talking-head generators. They produce dynamic, context-aware video scenes, expressive human animations, and rapid-turnaround educational clips using only text prompts. So when Higgsfield temporarily removed restrictions, it wasn’t just a gift to filmmakers — it was an invitation to explore what high-quality video generation could look like in therapeutic and educational practice.

What Exactly Is Nano Banana 3?

Nano Banana 3 is Higgsfield’s lightweight, fast, and impressively realistic video model. It can generate short, smooth, expressive videos with better motion stability and less distortion than the previous Nano Banana versions. Nano Banana Pro — which people now have temporary free access to — adds even more:

For therapists, teachers, and clinicians, this means the ability to instantly create intervention videos, role-play models, visual supports, psychoeducation clips, and demonstration scenes that would normally take hours to film.

Why This Release Matters for Practice

I’ll be honest: when video-generation models first appeared, I didn’t see them as therapy tools. But the Nano Banana models changed my mind.
Their realism and flexibility fit directly into several needs we see every day: modeling communication, breaking down routines, illustrating social expectations, or simply making content engaging enough for learners who require visual novelty or repetition. This unrestricted release removes the barrier to experimentation. For three days, any therapist or educator can test Nano Banana Pro and actually see how AI-generated video could support their workflows without financial commitment or technical friction. For example:

What makes Nano Banana particularly interesting is the emotional realism. Characters move with natural pacing, eye gaze, and affect matching — features extremely valuable in social-communication interventions.

From My Perspective: Why You Should Try It

When tools like this become unrestricted, even briefly, we get a rare chance to explore what the future of intervention might feel like. Not theoretical, not conceptual — real, hands-on experimentation. I see huge potential in:

1. Parent Coaching: Quickly making custom videos that model strategies the parent can repeat at home.
2. Social-Emotional Learning: Creating emotionally accurate scenes for teens with ASD, ADHD, or anxiety.
3. AAC & Communication: Demonstrating key phrases or modeled scripts in naturalistic situations.
4. Motor Learning: Showing task sequences with slowed motion or highlighted joints.
5. Research Applications: Generating standardized, high-quality visual stimuli for cognitive or behavioral studies.

A tool like this doesn’t replace therapy — but it extends it. It fills the gap between sessions, helps personalize intervention, and gives families meaningful resources that feel engaging, culturally adaptable, and accessible.

A Few Cautions

Of course, video generation is not without concerns.
We still need clear boundaries around:

But when used appropriately, tools like Nano Banana can help scale interventions, enrich learning, and support environments where visual modeling is a core instructional method.

A Moment to Explore, Not to Rush Through

Higgsfield opening Nano Banana Pro to the public is bold. It’s also a glimpse of how accessible high-end AI creation may become. For many professionals, these three days are an opportunity to test workflows that could eventually become standard practice — from creating personalized therapy materials to building research stimuli or educational modules. Whether you use the full three days or just a few minutes, it’s worth stepping in. Not because AI will replace human teaching or therapeutic presence — but because it can extend it in powerful, flexible, and creative ways.
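The intervention-video ideas listed above lend themselves to reusable prompt templates. Below is a minimal sketch of how a practice might standardize scene prompts before pasting them into a video generator; the template names and wording are illustrative assumptions, and no real Higgsfield API is called.

```python
# Hypothetical prompt-template helper for common intervention-video scenes.
# Template names and phrasing are illustrative only; this does not call
# any real Higgsfield / Nano Banana API.

SCENE_TEMPLATES = {
    "routine_modeling": (
        "A calm, realistic video of {actor} demonstrating the routine "
        "'{skill}' step by step, at a slow, clear pace, in {setting}."
    ),
    "social_script": (
        "Two people in {setting} modeling the social script '{skill}', "
        "with natural eye gaze, turn-taking, and warm affect."
    ),
}

def build_prompt(template: str, actor: str, skill: str, setting: str) -> str:
    """Fill a scenario template with client-specific details."""
    if template not in SCENE_TEMPLATES:
        raise ValueError(f"Unknown template: {template!r}")
    return SCENE_TEMPLATES[template].format(
        actor=actor, skill=skill, setting=setting
    )

prompt = build_prompt(
    "routine_modeling",
    actor="an adult teacher",
    skill="packing a school bag",
    setting="a tidy classroom",
)
print(prompt)
```

Keeping templates in one place makes it easy to reuse the same scene structure across clients while swapping in the individual skill and setting.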


Gemini 3: Google's Most Capable Model Yet — And What It Means for Therapy, Education, and Brain-Based Practice

Every year, AI pushes a little further into territory we once believed required exclusively human cognition: nuance, empathy, reasoning, and adaptability. But with Google’s release of Gemini 3, something feels different. This new generation isn’t just another model update—it’s a shift toward AI that reasons more coherently, communicates more naturally, and integrates into clinical, educational, and research ecosystems with unprecedented fluency. To many of us working in the therapeutic world, Gemini 3 arrives at a time when we are juggling increasing caseloads, administrative pressure, and the need for more adaptive tools that support—not replace—our expertise. And surprisingly, this model feels like a thoughtful response to that reality.

What Gemini 3 Actually Is — Beyond the Marketing

Google positions Gemini 3 as its most advanced multimodal model: text, audio, images, video, graphs, code, and real-time interactions all feed into one system. But what stands out is its improved reasoning consistency. Earlier models, including Gemini 1.5 and 2.0, impressed on benchmarks but sometimes struggled in long, structured tasks or therapeutic-style communication. Gemini 3 shows noticeable refinement. It handles complex, layered prompts with fewer errors. It sustains long conversations without losing context. And perhaps most relevant to us—it is more sensitive to tone and intention. When you ask for a parent-friendly explanation of auditory processing disorder, or a trauma-informed classroom strategy, or a neutral summary of recent research, its responses feel less generic and more aligned with clinical communication standards.

Google also introduced stronger multilingual performance, something particularly important for our multilingual therapy and school settings. Gemini 3 processes Arabic, French, and South Asian languages with far greater stability than earlier iterations. For families and educators working in diverse linguistic communities, this matters.
How It Could Support Real Practice — From Our Perspective

I’ll be honest: when AI companies announce new models, my first reaction is usually cautious curiosity. “Show me how this helps in a real therapy room.” With Gemini 3, I’m beginning to see practical pathways. In our therapeutic and educational contexts, the model’s improvements could enhance practice in several ways:

1. More Accurate Support for Clinical Writing

Gemini 3 feels significantly more reliable in structuring reports, generating progress summaries, and translating clinical findings into clear, digestible language. For many clinicians, writing takes as much time as therapy itself. A model that reduces cognitive load without compromising accuracy genuinely matters.

2. Better Tools for Psychoeducation

One of its strengths is tone adaptability. You can request information written for a parent with limited health literacy, a teacher seeking intervention strategies, or a teenager trying to understand their diagnosis. The explanations sound more natural, less robotic, and more respectful—qualities essential in psychoeducation.

3. Enhanced Use in Research and Evidence Synthesis

The model’s ability to handle long documents and produce structured, conceptually accurate summaries makes literature reviews, protocol design, and thematic analysis far more manageable. For students, researchers, and clinicians engaged in EBP, this can be a real asset.

4. A Potential Co-Facilitator for Learning & Rehabilitation

Gemini 3 can generate adaptive tasks, scaffold instructions, model social scripts, or create visual-supported routines. While no AI can replace human therapeutic presence, it can extend learning between sessions and increase engagement—especially for children, neurodivergent learners, and individuals needing high repetition.

5. More Reliable Multimodal Reasoning

Therapists often rely on materials—videos, images, diagrams, routines—to support learning.
Gemini 3’s improved image analysis and video interpretation could help clinicians create resources faster and with greater clarity.

But Here’s the Real Question: Should We Be Excited or Cautious?

As therapists, we always stand with one foot in innovation and one firmly in safety. With Gemini 3, that stance remains essential. The excitement comes from its ability to improve access, reduce overwhelm, and support families who need more than a once-a-week session. But caution is necessary because the more “human-like” the model becomes, the easier it is for users to over-trust its authority. Gemini 3 can sound empathetic—but it does not understand emotions. It can synthesize research—but it cannot replace clinical judgment.

The path forward, I believe, is intentional integration. We use Gemini 3 to enhance—not overshadow—our expertise. We let it support the labor-intensive parts of practice while ensuring interpretation and decision-making remain firmly human. And we maintain transparency with our clients, students, and families about where AI fits into our work.

Why Gemini 3 Matters Now

We are entering a period where AI tools are no longer optional—they’re becoming part of the professional landscape. What differentiates Gemini 3 is not its novelty, but its maturity. It offers enough stability, depth, and flexibility to genuinely support practice, without the erratic unpredictability that marked earlier generations. For therapists, special educators, and researchers, Gemini 3 represents an opportunity to reclaim time, enhance personalization, and deepen our capacity to deliver care. But it also invites us to reflect thoughtfully on our role in this changing ecosystem: to lead the conversation on ethical integration, to train the next generation in AI literacy, and to ensure technology remains a tool of empowerment rather than replacement.

The future of therapy is still human-centered. Gemini 3 simply gives us more room to keep it that way.


Tavus.io: The Rise of AI Human Video and What It Means for Therapy, Education, and Client Engagement

AI-generated video has evolved rapidly, but Tavus.io represents one of the most significant leaps forward — not just for marketing or content creation, but for human-centered practice. Tavus blends generative video with conversational AI, allowing users to create lifelike “AI Humans” that look, speak, and respond like a real person in real time. For those working in therapy, rehabilitation, special education, or health research, this technology raises fascinating possibilities for connection, continuity, and support.

Tavus allows anyone to create a digital version of themselves through a short video recording. Using advanced video synthesis, voice replication, and a real-time conversational engine, the AI Human can then deliver personalized information, respond to questions, and maintain natural back-and-forth dialogue. What makes Tavus stand out is how convincingly human these interactions feel — lip movement, tone, micro-expressions, pauses, and even warmth are remarkably well replicated. This is not a scripted avatar reading from a prompt; it is a dynamic, adaptive system that can hold a conversation.

One of Tavus’s most compelling aspects is its emotional presence. Many AI tools can generate text or voice, but Tavus adds the visual and relational layer that therapists and educators often rely on. For a child who struggles with attention, for example, seeing a familiar face explain a task may be more engaging than audio instructions. For families who need consistent psychoeducation, a therapist’s AI Human could walk them through routines, home-practice exercises, or behavior strategies between sessions. The technology does not replace real therapeutic interaction — but it can extend the sense of continuity and personalize support beyond the scheduled hour.

The platform also sits at an interesting intersection between accessibility and scalability.
Many clinicians struggle with the time demands of creating individualized resources, recording educational videos, or maintaining consistent follow-up. With Tavus, a digital replica could produce tailored reminders, explain therapy steps, or offer instructional modeling without requiring clinicians to film new content every time. For special educators, this could mean creating personalized visual instructions for students who depend on repetition and predictability. For researchers, Tavus opens the door to standardized yet naturalistic video administration in cognitive or behavioral studies, improving consistency across participants.

Still, these new capabilities demand careful consideration. Cloning a clinician’s face and voice brings ethical questions around consent, identity, and professional boundaries. Researchers and clinicians must be transparent about how their AI Human is used, who interacts with it, and what data is collected. There are also relational concerns. If a client forms an attachment to a therapist’s AI replica, how does that affect the therapeutic alliance? How do we prevent misunderstandings about the difference between a human clinician and a digital representation? The emotional realism that makes Tavus promising is the same realism that requires thoughtful guardrails.

From a research perspective, Tavus’s real-time conversational API is particularly noteworthy. Developers can train the AI Human on specific data — therapeutic principles, educational content, or institutional guidelines — and embed it into apps or web platforms. This could lead to new ways of delivering self-guided interventions, early identification of needs, or structured conversational practice for individuals with social communication challenges. The ability to scale personalized video support across thousands of learners or clients is unprecedented.

Yet Tavus’s potential is not only in delivering information, but in reinforcing the human behind the message.
The system captures the familiarity of a clinician’s face, voice, and demeanor — something text-based AI cannot do. Used responsibly, this could strengthen engagement, increase retention in treatment programs, and support individuals who need more frequent visual prompting or reassurance.

Tavus is not a replacement for therapy. It is a new modality of communication — one that blends human presence with AI scalability. For many clinicians and educators, the question is no longer “Is this coming?” but “How should we use it well?” As AI video continues to evolve, Tavus offers a glimpse of a future where digital tools feel less mechanical and more relational, giving professionals new ways to extend care, reinforce learning, and bridge gaps outside the therapy room.

Suggested Reading
Explore Tavus.io: https://www.tavus.io
VEED x Tavus Partnership Overview: https://www.veed.io/learn/veed-and-tavus-partnership
Tavus API Documentation: https://docs.tavus.io/sections/video/overview
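To make the conversational-API point concrete, here is a minimal sketch of assembling a request body for starting an AI Human conversation session. The field names follow the general pattern of Tavus's conversations API, but treat them as assumptions and check the documentation at docs.tavus.io for the actual endpoint, schema, and authentication before relying on this.

```python
import json

# Hypothetical sketch: build the JSON body for creating a conversation
# with a cloned clinician replica. Field names (replica_id, persona_id,
# conversation_name) are assumptions modeled on Tavus's documented API;
# verify against https://docs.tavus.io before use.

def build_conversation_request(replica_id: str, persona_id: str,
                               name: str) -> dict:
    """Assemble the request body for a new conversation session."""
    return {
        "replica_id": replica_id,        # the clinician's video replica
        "persona_id": persona_id,        # conversational behavior/knowledge
        "conversation_name": name,
    }

body = build_conversation_request(
    "r-clinician-demo", "p-psychoeducation", "home-practice-walkthrough"
)
print(json.dumps(body, indent=2))

# Sending it would look roughly like this (needs an API key; not run here):
#   requests.post("https://tavusapi.com/v2/conversations",
#                 headers={"x-api-key": API_KEY}, json=body)
```

Keeping the payload construction separate from the network call makes it easy to review exactly what client-related data leaves your system, which matters given the consent questions raised above.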


ChatGPT 5.1: The Most Human AI Yet — And What It Means for Our Work in Therapy, Education, and Research

If you’ve been using ChatGPT for a while, you may have noticed something this month — it suddenly feels different. Warmer. Sharper. A bit more… human. That’s not by accident. On November 12, 2025, OpenAI officially rolled out ChatGPT 5.1, and this update quietly marks one of the biggest shifts in how we’ll work with AI in clinical, educational, and research settings.

I’ve spent the past week experimenting with it across therapy planning, academic analysis, and content design. What struck me wasn’t just the improved accuracy — it was the way the AI “holds” a conversation now. It feels less like querying a machine and more like collaborating with a knowledgeable colleague who adapts their tone and depth depending on what you need. This isn’t hype — it’s architecture. And it’s worth understanding what changed, because these changes matter deeply for practice.

A New Kind of AI: Adaptive, Expressive, and Surprisingly Human

The GPT-5.1 update introduces two new model behaviors that genuinely shift its usefulness:

1. GPT-5.1 Instant — the “human-sounding” one

This version focuses on tone, warmth, responsiveness, and emotional contour. It’s designed to carry natural dialogue without feeling rigid or scripted. As OpenAI describes, it’s built to “feel more intuitive and expressive.”

2. GPT-5.1 Thinking — the analytical one

This variant does something no GPT model has done before: it thinks longer when it needs to, and responds more quickly when it doesn’t. This is huge. It means the model adjusts its cognitive workload much as we do — slowing down for complex reasoning, speeding up for routine tasks. OpenAI confirmed these changes improve performance across logic, math, coding, and multi-step reasoning tasks. That adaptability makes GPT-5.1 closer to a genuine cognitive partner than a question-answer tool.
Tone Control: The Feature That Changes Everything

GPT-5.1 introduces eight personality presets (Professional, Friendly, Candid, Quirky, Nerdy, Cynical, Efficient, and Default), plus experimental sliders that let you control:

For clinicians and researchers, this means we can now shape AI output according to purpose: a psychoeducation script for a parent meeting needs a different “voice” than a research synthesis or a therapy report. This level of control may be one of the most important steps toward making AI genuinely usable in sensitive, human-centered fields.

Where GPT-5.1 Actually Changes Practice

After testing it across multiple settings, three shifts stand out to me:

1. Therapy Planning Feels More Collaborative

GPT-5.1 Instant produces conversational prompts, social stories, cognitive-behavioral scripts, and session ideas in a tone that feels usable with real clients. Not clinical. Not robotic. Not formal. Just human enough.

2. Academic and Clinical Writing Becomes Faster and Cleaner

The Thinking model handles literature synthesis more coherently, drills down into conceptual frameworks, and maintains clarity even in longer analytical passages. As someone juggling multiple academic roles, this is a dramatic improvement.

3. Research Navigation Becomes Less Overwhelming

GPT-5.1 is noticeably better at connecting theories, comparing methodologies, and explaining statistical models. It’s not replacing critical thinking — but it absolutely accelerates it. This matters because research literacy is increasingly becoming a prerequisite for ethical practice.

Not Everything Is Perfect — And That’s Important to Say

With more expressive language, ChatGPT 5.1 sometimes leans into over-articulation. Responses can be too polished or too long. That may sound like a small complaint, but in therapy or medical contexts, excess wording can dilute precision. There’s also the bigger ethical reality: the more human these models feel, the easier it is to forget that they are not human.
GPT-5.1 may sound empathetic, but it does not experience empathy. It may sound thoughtful, but it does not truly understand. It may draft clinical notes beautifully, but it cannot replace judgment. In other words: GPT-5.1 is a powerful partner — as long as the human stays in charge.

Where We Go From Here

What I find most encouraging is that GPT-5.1 feels like a model designed with professionals in mind. It respects tone. It respects nuance. It understands that not all tasks are equal — some require speed, others require depth. For those of us working in therapy, education, psychology, neuroscience, and research, this update provides something we’ve needed for a long time: a tool that can meet us where we are, adapt to what we need, and amplify — not replace — our expertise.

ChatGPT 5.1 doesn’t just make AI stronger. It makes it more usable. And that’s a turning point.
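For readers who script against the model rather than use the app, the tone presets described above can be approximated in API calls by mapping each preset to a system instruction. This is a hedged sketch: the preset names come from the article, but the system-prompt wording and the `compose_messages` helper are illustrative assumptions, not OpenAI's implementation.

```python
# Illustrative mapping from the article's preset names to system
# instructions; the wording of each instruction is an assumption,
# not OpenAI's actual preset text.
TONE_PRESETS = {
    "Professional": "Respond in a precise, formal, clinical register.",
    "Friendly": "Respond warmly and conversationally, in plain language.",
    "Efficient": "Respond tersely; prefer bullet points over prose.",
    "Default": "Respond in a balanced, neutral tone.",
}

def compose_messages(preset: str, user_prompt: str) -> list:
    """Build a chat messages list with the chosen tone as the system turn."""
    tone = TONE_PRESETS.get(preset, TONE_PRESETS["Default"])
    return [
        {"role": "system", "content": tone},
        {"role": "user", "content": user_prompt},
    ]

messages = compose_messages(
    "Friendly",
    "Explain auditory processing disorder for a parent meeting.",
)
print(messages[0]["content"])
# These messages could then be passed to a standard chat-completions call.
```

The point of the pattern is simply that the same clinical question can be wrapped in different voices for different audiences, which mirrors how the in-app presets are meant to be used.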


AI for Inclusion: What Works Now for Learners with Special Educational Needs

Every so often a research paper lands that feels less like a forecast and more like a field guide. The OECD’s new working paper on AI for students with special education needs is exactly that—practical, grounded, and refreshingly clear about what helps right now. If you care about brain‑friendly learning, this is good news: we’re moving beyond shiny demos into tools that lower barriers in everyday classrooms, therapy rooms, and homes.

The paper’s central idea is simple enough to fit on a sticky note: inclusion first, AI second. Instead of asking “Where can we push AI?” the authors ask “Where do learners get stuck—and how can AI help remove that barrier?” That’s the spirit of Universal Design for Learning (UDL): give learners multiple ways to engage with content, multiple ways to understand it, and multiple ways to show what they know. AI becomes the backstage crew, not the headliner—preparing captions, adapting tasks, translating atypical speech, and nudging practice toward the just‑right challenge level.

What does this look like in real life? Picture a student whose handwriting slows down everything. Traditional practice can feel like running in sand—lots of effort, little forward motion. Newer, tablet‑based coaches analyze the micro‑skills we rarely see with the naked eye: spacing, pressure, pen lifts, letter formation. Instead of a generic worksheet, the learner gets bite‑sized, game‑like tasks that target the exact stumbling blocks—then cycles back into real classroom writing. Teachers get clearer signals too, so support moves from hunches to evidence.

Now think about dyslexia. Screening has always walked a tightrope: catch risk early without labeling too fast. The paper highlights tools that combine linguistics with machine learning to spot patterns and then deliver thousands of tiny, personalized exercises. The win isn’t just early identification; it’s keeping motivation intact.
Short, achievable practice turns improvement into a string of small wins, which is catnip for the brain’s reward system.

Some of the most heartening progress shows up in communication. If you’ve ever watched a child with atypical speech be understood—really understood—by a device that has learned their unique patterns, you know it feels like a door opening. Fine‑tuned models now translate highly individual speech into clear text or voice in real time. Families tell researchers that daily life gets lighter: ordering in a café, answering a classmate, telling a joke at the dinner table. The paper is careful not to overclaim, but the early signals are powerful.

Social communication for autistic learners is getting smarter, too. On‑screen or embodied agents can practice turn‑taking, joint attention, and emotion reading in a space that’s structured and safe. Educators can tweak prompts and difficulty from a dashboard, so sessions flex with energy levels and goals. The magic here isn’t that a robot “teaches” better than a human; it’s that practice becomes repeatable, low‑stakes, and tuned to the moment—then transferred back to real interactions.

Not all wins are flashy. Converting static PDFs into accessible, multimodal textbooks sounds mundane until you watch it unlock a unit for an entire class. Text‑to‑speech, captions, alt‑text, adjustable typography, and cleaner layouts benefit students with specific needs—and quietly help everyone else. This is UDL’s ripple effect: when we design for variability, the floor rises for all learners.

Under the hood, personalization is getting sharper. Instead of treating “math” or “reading” as monoliths, systems map skills like networks. If multiplication is shaky because repeated addition never solidified, the system notices and steps back to build the missing bridge. Learners feel less frustration because the work finally matches their readiness.
Teachers feel less guesswork because the analytics point to actionable scaffolds, not vague “struggling” labels.

So where’s the catch? The paper is clear: many tools still need larger, longer, and more diverse trials. Evidence is growing, not finished. We should celebrate promising results—and still measure transfer to real tasks, not just in‑app scores.

And we can’t ignore the guardrails. Special education involves some of the most sensitive data there is: voice, video, eye‑gaze, biometrics. Privacy can’t be an afterthought. Favor on‑device processing where possible, collect only what you need, keep it for as short a time as you can, and use consent language that families actually understand. Bias is another live wire. If speech models don’t learn from a wide range of accents, ages, and disability profiles, they’ll miss the very learners who need them most. And yes, there’s an environmental bill for heavy AI. Right‑sized models, greener compute, and sensible usage policies belong in the conversation.

What should teachers and therapists do with all this tomorrow morning? Start with the barrier, not the tool. Identify the friction—copying from the board, decoding dense text, being understood—and pilot something that targets that friction for eight to twelve weeks. Keep it humble and measurable: a pre/post on intelligibility, words per minute, error patterns, or on‑task time tells a better story than “students liked it.” Treat multimodality as default, not accommodation: captions on, text‑to‑speech available, alternative response modes open. And capture whether gains show up in real classwork. If progress lives only inside an app, it’s not the progress you want.

For school leaders, the paper reads like a procurement sanity check. Ask vendors for research summaries you can actually read, not just glossy claims. Demand accessibility as a feature, not an add‑on—screen reader support, captions, switch access. Check interoperability so your data doesn’t get stuck.
Bake privacy into contracts: where data lives, how long it stays, how deletion works. Push for localization and equity—bilingual interfaces, dialect sensitivity, culturally relevant content—because a tool that isn’t understood won’t be used. And if a vendor can talk credibly about energy and efficiency, that’s a green flag.

Bottom line: AI isn’t replacing the art of teaching or therapy. It’s removing friction so strengths surface sooner. It’s turning opaque struggles into visible, coachable micro‑skills. It’s helping voices be heard and ideas be expressed. If we keep learners and families at the center, measure what matters, and mind the guardrails, this isn’t hype—it’s momentum we can build on.

Read the full OECD paper: https://www.oecd.org/content/dam/oecd/en/publications/reports/2025/09/leveraging-artificial-intelligence-to-support-students-with-special-education-needs_ebc80fc8/1e3dffa9-en.pdf
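The pre/post measurement habit recommended above can be as simple as a few lines of code: record one concrete metric (say, words per minute) before and after an eight-to-twelve-week pilot and report absolute and percent change. A minimal sketch, with made-up numbers purely for illustration:

```python
# Simple pre/post change calculator for a classroom or clinic pilot.
# All scores below are fabricated examples, not real data.

def pre_post_change(pre: float, post: float) -> dict:
    """Absolute and percent change for one learner's metric."""
    delta = post - pre
    pct = (delta / pre * 100) if pre else float("nan")
    return {"pre": pre, "post": post, "delta": delta,
            "pct_change": round(pct, 1)}

# Hypothetical words-per-minute scores for three learners (pre, post).
pilot = {
    "learner_a": (8.0, 11.0),
    "learner_b": (14.0, 15.5),
    "learner_c": (6.0, 9.0),
}

for learner, (pre, post) in pilot.items():
    print(learner, pre_post_change(pre, post))
```

Even this much structure turns "students liked it" into a number you can compare against real classwork at the end of the pilot.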


Click Less, Think More: How Atlas Changes Your Day

ChatGPT Atlas is the kind of upgrade you only appreciate after a single workday with it. Instead of juggling a separate ChatGPT tab, a dozen research pages, and that half‑written email, Atlas pulls the assistant into the browser itself so you can read, ask, draft, and even delegate steps without breaking focus. OpenAI introduced it on October 21, 2025, as a macOS browser available worldwide for Free, Plus, Pro, and Go users, with Agent mode in preview for Plus, Pro, and Business and admin‑enabled options for Enterprise and Edu. Windows, iOS, and Android are on the way, but the story starts here: a browser that understands the page you’re on and can help you act on it.

If you’ve ever copied a paragraph into ChatGPT just to get a plainer explanation, you’ll like the Ask ChatGPT sidebar. It rides alongside whatever you’re viewing, so you can highlight a passage and ask for an explanation, a summary for families, or a quick draft to paste into your notes—without leaving the page. You can type or talk, and the conversation stays anchored to the page in view. For writing, Atlas adds an “Edit with ChatGPT” cursor directly in web text fields: select text, invoke the cursor, and request a revision or dictate new content in place. It feels less like consulting a tool and more like having a helpful colleague in the margin.

Where things get interesting is Agent mode. When you switch it on, ChatGPT can take actions in your current browsing session: open tabs, navigate, click, and carry out multi‑step flows you describe. Planning a workshop? Ask it to gather venue options that match your accessibility checklist, compare prices and policies, and draft a short email to the top two. Wrangling admin tasks? Let it pre‑fill routine forms and stop for your review before submission. You set the guardrails—from preferred sources to required approval checkpoints—and you can even run the agent “logged out” to keep it away from signed‑in sites unless you explicitly allow access.
It’s a natural hand‑off: you start the task, the agent continues, and it reports back in the panel as it goes.

Because this is a browser, privacy and control matter more than features. Atlas ships with training opt‑outs by default: OpenAI does not use what you browse to train models unless you turn on “Include web browsing” in Data controls. Browser memories—the feature that lets ChatGPT remember high‑level facts and preferences from your recent pages—are strictly optional, viewable in Settings, and deletable; deleting your browsing history also removes associated browser memories. Business and Enterprise content is excluded from training, and admins can decide whether Browser memories are available at all. If you want quality signals to improve browsing and search but not training, Atlas separates that diagnostic toggle from the model‑training switch so you can keep one off and the other on.

Setup is quick. Download the macOS app, sign in with your ChatGPT account, and import bookmarks, passwords, and history from Chrome so you don’t start from zero. You can make Atlas your default in one click, and there’s a small, time‑limited rate‑limit boost for new default‑browser users to smooth the transition. It runs on Apple silicon Macs with macOS 12 Monterey or later, which covers most modern school or clinic machines.

For a brain‑friendly practice—whether you’re supporting learners, coaching adults, or coordinating therapy—Atlas changes the cadence of your day. Research no longer requires the swivel‑chair routine: open a guideline or policy page, ask the sidebar to extract the eligibility details or accommodations, and keep reading as it compiles what matters. When policies conflict, have it surface the differences and the exact language to discuss with your team. Drafting becomes lighter, too. Need a parent update in Arabic and English?
Keep your school page open, ask Atlas to produce a two‑column explainer grounded in that page, and paste it into your newsletter or WhatsApp note. Because the chat sits beside the source, you’re less likely to lose context—and more likely to keep citations tidy.

The benefits are practical in Qatar and across MENA, where bilingual communication and time‑to‑action often make or break a plan. Atlas respects your existing logins and runs locally on macOS, which means it adapts to your regional sites and Arabic/English workflows without new portals. Start small: use the sidebar for comprehension scaffolds during lessons, quick plain‑language summaries for families, or bilingual glossaries on the fly. When your team is comfortable, try Agent mode for repeatable tasks like collecting venue policies, drafting vendor comparisons, or preparing term‑start checklists—while keeping the agent in logged‑out mode if you don’t want it near signed‑in records. The point isn’t to automate judgment; it’s to offload the clicks so you can spend attention where it counts.

Safety is a shared responsibility, and OpenAI is frank that agentic browsing carries risk. Atlas limits what the agent can do—it can’t run code in the browser, install extensions, or reach into your file system—and it pauses on certain sensitive sites. But the company also warns about prompt‑injection attacks hidden in webpages that could try to steer an agent off course. The practical takeaway for teams is simple: monitor agent runs, prefer logged‑out mode for anything sensitive, and use explicit approval checkpoints. As with any new tool, start on low‑stakes workflows, measure outcomes like minutes saved or error rates, and scale intentionally.

Under the hood, Atlas also modernizes search and results. A new‑tab experience blends a chat answer with tabs for links, images, videos, and news, so you can go source‑first when you want to validate or dive deeper.
That’s useful for educators and clinicians who need traceable sources for reports: ask for a synthesis, then flip to the links view to gather citations you can verify. And because it’s still a browser, your usual web apps, calendars, and SIS/EMR portals stay right where they are—Atlas just gives you a knowledgeable helper at elbow height. If you publish a newsletter like Happy Brain Training, Atlas earns its keep quickly.


Parental Controls and Teen AI Use: What Educators and Therapists Need to Know

Artificial intelligence is now deeply woven into teenagers’ digital lives, and recent developments at Meta Platforms illustrate how that is generating both excitement and concern. In October 2025, Meta announced new parental-control features designed to address how teens interact with AI chatbots across Instagram, Messenger, and Meta AI. The new settings will let parents turn off one-on-one conversations with AI characters, block specific AI characters entirely, and get insight into the broader topics their teens are discussing with AI.

For therapists and special educators, this kind of change has direct relevance. Teens use AI chatbots not merely as novelty apps but as daily companions, confidants, and conversation partners. By some research estimates, more than 70% of teens have used AI companions and more than half engage with them regularly. That means when we talk about teens’ social and emotional support, the digital dimension is increasingly part of the context.

Why does this matter? First, if a teen is working through challenges, worries, or social communication via an AI chatbot, it raises important questions: what kinds of messages are being reinforced? Do these interactions build autonomy, reduce interaction with peers or adults, or entrench unhealthy patterns of isolation or dependence? For example, if an anxious student prefers sessions with a chatbot over adult-led conversations, we need to ask whether that substitution is helpful, neutral, or potentially problematic.

Second, educators and therapists are well placed to intervene proactively…


Inclusive AI in Education: A New Frontier for Therapists and Special Educators

The promise of artificial intelligence in education has grown rapidly, and a new working paper from the Organisation for Economic Co-operation and Development (OECD), “Leveraging Artificial Intelligence to Support Students with Special Education Needs,” offers a thoughtful look at how AI can support learners—with significant caveats. The report argues that AI tools that adapt instruction, generate accessible content, and provide support tailored to individual learners hold real potential for special education, therapy, and inclusive classrooms. For example, an AI system could generate simplified reading passages for students with dyslexia, create visual supports or scaffolds for students with language delays, or adjust pacing and format for students with attention or processing difficulties.

For therapists and special educators, this means opportunities to innovate. Instead of manually creating multiple versions of a lesson or communication script, generative AI can support you by quickly producing varied, adapted materials. A speech-language pathologist working with bilingual children could use an AI tool to produce scaffolded materials in both languages; an occupational therapist could generate tactile instructions or interactive supports matched to a student’s profile.

However, the OECD report also stresses that equity, access, and human-centered design must accompany these possibilities. AI tools are often trained on data from typical learners, not those with rare disabilities or atypical communication profiles. Representation gaps and access inequities…
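To make the material-adaptation workflow concrete, here is a minimal sketch of a reusable prompt builder that a practitioner could pair with any text-generation model. Everything here is an illustrative assumption: the function name, the learner-profile wording, and the idea of routing every request through one template come from me, not from the OECD paper, and the actual model call is deliberately left out.

```python
# Illustrative sketch only: building one reusable prompt template for
# adapted learning materials. Profiles and wording are hypothetical
# examples, not recommendations from the OECD working paper.

SUPPORT_PROFILES = {
    "dyslexia": "short sentences, high-frequency words, one idea per sentence",
    "language_delay": "simplified vocabulary plus cues for visual supports",
    "attention": "small chunks with a comprehension check after each chunk",
}

def build_adaptation_prompt(passage: str, profile: str,
                            language: str = "English") -> str:
    """Assemble a prompt asking a generative model to adapt a passage
    for a given learner profile; sending it to a model is out of scope."""
    if profile not in SUPPORT_PROFILES:
        raise ValueError(f"unknown profile: {profile}")
    return (
        f"Rewrite the passage below in {language} for a learner who needs "
        f"{SUPPORT_PROFILES[profile]}. Keep the meaning identical.\n\n"
        f"Passage:\n{passage}"
    )

# One source passage, several adapted requests -- e.g. a bilingual SLP
# generating the same scaffold in two languages.
for lang in ("English", "Arabic"):
    print(build_adaptation_prompt(
        "Photosynthesis converts light into energy.", "dyslexia", lang))
```

The design point is that the clinician edits one template and one profile table, then generates as many variants as needed, rather than hand-writing each version.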


Echo-Teddy: An LLM-Powered Social Robot to Support Autistic Students

One of the most promising frontiers in AI and special education is the blend of robotics and language models to support social communication. A recent project, Echo-Teddy, pushes into this space, and it offers lessons, possibilities, and cautions for therapists, educators, and clinicians working with neurodivergent populations.

What is Echo-Teddy? Echo-Teddy is a prototype social robot powered by a large language model (LLM), designed specifically to support students with autism spectrum disorder (ASD). Its developers built it to provide age-appropriate conversational interaction combined with simple motor and gesture capabilities. Unlike screen-bound chatbots, Echo-Teddy occupies physical space, letting learners engage with it as a real-time social companion. The system is built on a modest robotics platform (think Raspberry Pi and basic actuators) and integrates speech, gestures, and conversational prompts in its initial form. In that first phase, the designers worked with expert feedback and developer reflections to refine how the robot interacts: personalizing dialogue, adapting responses, and adjusting prompts to align with learner needs. They prioritized ethical design and age-appropriate interactions, emphasizing that the robot should not displace or replace human relational connection.

Why Echo-Teddy matters for practitioners: it sits at the intersection of three trends many practitioners in the field are watching.

Key considerations and challenges: no innovation comes without trade-offs. When weighing Echo-Teddy’s relevance or future deployment, keep…


Evaluating AI Chatbots on Evidence-Based Health Advice: A 2025 Perspective

As artificial intelligence continues to permeate every sector, its application in healthcare has drawn considerable attention. A recent study published in Frontiers in Digital Health evaluated the accuracy of several AI chatbots (ChatGPT-3.5, ChatGPT-4o, Microsoft Copilot, Google Gemini, Claude, and Perplexity) in providing evidence-based health advice, focusing specifically on lumbosacral radicular pain.

Study overview: the researchers posed nine clinical questions about lumbosacral radicular pain to the latest versions of the chatbots listed above. The questions were designed around established clinical practice guidelines (CPGs). Each chatbot response was assessed for text consistency, intra- and inter-rater reliability, and rate of agreement with CPG recommendations.

Key findings: the study found notable variability in the internal consistency of AI-generated responses, ranging from 26% to 68%. Intra-rater reliability was generally high, with ratings ranging from “almost perfect” to “substantial.” Inter-rater reliability also varied, from “almost perfect” to “moderate.”

Implications for health professionals: the results underscore the need for caution when considering AI-generated health advice. While AI chatbots can serve as supplementary tools, they should not replace professional judgment. The variability in accuracy and in adherence to clinical guidelines suggests that AI-generated recommendations…
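The metrics in this kind of evaluation can be made concrete with a short sketch. Assuming each chatbot answer is coded as “match” or “no” against the relevant CPG recommendation (my simplification, not the study’s published protocol), the CPG match rate is a simple proportion, and rater agreement can be summarized with Cohen’s kappa, a common statistic behind labels like “almost perfect” or “moderate.” The ratings below are invented for illustration.

```python
from collections import Counter

def match_rate(codes):
    """Proportion of answers coded as matching the guideline."""
    return sum(c == "match" for c in codes) / len(codes)

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Invented codings for nine questions: two passes by the same rater
# (intra-rater agreement).
pass1 = ["match", "match", "no", "match", "no", "match", "match", "no", "match"]
pass2 = ["match", "match", "no", "no",    "no", "match", "match", "no", "match"]

print(f"CPG match rate:    {match_rate(pass1):.0%}")      # 6 of 9 questions
print(f"Intra-rater kappa: {cohens_kappa(pass1, pass2):.2f}")
```

On these invented ratings the kappa lands around 0.77, which conventional benchmarks would call “substantial” agreement; the point is that a high raw match count can still hide meaningful disagreement once chance agreement is factored out.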
