February 21–23, 2024
Montréal, Canada

Artificial Intelligence Conference in Montréal

In this talk, we will introduce a novel design pattern called "micro-mind-services," inspired by neural networks, which promises to revolutionize how we build applications, particularly in conjunction with AI programming assistants.

Join me to discover this potentially groundbreaking design pattern and explore how it can transform software development in the age of AI programming assistants.
Operating machine-learning-based software can mean managing a sizeable fleet of models. Experiments, specialized models, and multi-stage processing pipelines can all make deployment complex. How can we help data scientists reduce this complexity and cut the time spent on deployment activities?
Let's see how to manage artifacts, configurations, and their versions with TorchServe!
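The version management described here maps onto TorchServe's REST management API (port 8081 by default). Below is a minimal sketch of the request URLs involved; the host, model name, and version are illustrative, and a running TorchServe instance is assumed:

```python
# Sketch of TorchServe management-API calls (default port 8081).
# Host, model name, and version are illustrative; a running TorchServe
# instance is assumed. Requests would be sent with any HTTP client.
from urllib.parse import urlencode

BASE = "http://localhost:8081"

def register_model_url(mar_file: str, model_name: str, workers: int = 1) -> str:
    """Build the URL that registers a .mar model archive (sent as a POST)."""
    query = urlencode({"url": mar_file, "model_name": model_name,
                       "initial_workers": workers})
    return f"{BASE}/models?{query}"

def set_default_version_url(model_name: str, version: str) -> str:
    """Build the URL that promotes one registered version to default (sent as a PUT)."""
    return f"{BASE}/models/{model_name}/{version}/set-default"
```

Registering several versions of the same model name and then flipping the default is what lets a data scientist roll a new model forward (or back) without redeploying the serving layer.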
This talk delves into the present and future of AI in SQL and data analytics. It shows how AI boosts SQL-based analytics for quicker, more precise insights. We'll tackle challenges like data quality, interpretability, and privacy, exploring ongoing research in explainable AI and privacy preservation. Future advances involve automated query optimization, smart data visualization, and integration with emerging database architectures.
Explore bio-inspired algorithms like Firefly and Ant Colony Optimization. In this talk we'll delve into the application of these and other algorithms devised by nature herself, analyzing their pros and cons, and optimal usage. We'll also see the power of these algorithms by using one to train a machine-learning model. You'll leave with the tools you need to implement these algorithms in your preferred language.
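To make the idea concrete, here is a compact, dependency-free sketch of the Firefly Algorithm minimizing a toy function. The parameter values (alpha, beta0, gamma) are illustrative defaults, not tuned settings:

```python
# Firefly Algorithm sketch: each firefly moves toward brighter (better)
# fireflies, with attraction fading over distance plus a small random walk.
import math
import random

def firefly_minimize(f, dim=2, n=15, iters=60, alpha=0.2, beta0=1.0,
                     gamma=0.01, seed=42):
    rng = random.Random(seed)
    # Random initial swarm in [-5, 5]^dim.
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        light = [f(x) for x in pop]              # lower value = brighter firefly
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:          # j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attraction fades with distance
                    pop[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    light[i] = f(pop[i])
        alpha *= 0.97                            # anneal the random walk over time
    best = min(pop, key=f)
    return best, f(best)

best, value = firefly_minimize(lambda x: x[0] ** 2 + x[1] ** 2)
```

The same loop works for any objective you can score, which is why these algorithms can also drive machine-learning training: the "function" being minimized is simply the model's loss.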
Are you intrigued by the possibilities of running your very own large language model? Join me as we delve into the hardware requirements, step-by-step processes, and limitations of openly available LLMs. From crafting your home Jarvis to building your office librarian.
Learn the essential tools, key definitions, and see practical code examples. Unleash the potential of AI while keeping your data secure.
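The hardware question usually starts with a back-of-the-envelope memory estimate. The helper below is a rule of thumb only (weights = parameters times bytes per parameter), and the 20% overhead for activations and KV cache is an assumed fudge factor, not a measured figure:

```python
# Rough VRAM estimate for running an LLM locally.
# Rule of thumb: weight bytes = parameters x bits-per-weight / 8,
# plus an assumed ~20% overhead for activations and KV cache.
def vram_gb(params_billions: float, bits_per_weight: int, overhead: float = 0.20) -> float:
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return round(weight_bytes * (1 + overhead) / 1e9, 1)
```

For a 7B-parameter model this gives roughly 16.8 GB at fp16 but about 4.2 GB with 4-bit quantization, which is the difference between needing a datacenter GPU and fitting on a consumer card.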
In this talk, delve into crafting a mini-Gemini Chatbot using Google's latest Generative AI via Google AI Studio, Gemini Pro model, and Angular. Uncover the potential of Gemini foundational models, AI Agents, tool integration, API calls, and advanced features like Retrieval Augmented Generation (RAG) for enhanced grounding and expanded training data. The Google Gemini era has arrived!
Learn how to fine-tune an open-source pre-trained Llama 2 model so it can generate a multi-question quiz from a PDF!
Topics covered:
- Synthetic data generation
- Fine-tuning Llama 2
- Deployment to the cloud via RunPod
Have you ever considered building a chatbot on top of your knowledge base (KB)? I'll walk you through a simple way to do it using Python, the capabilities of a Large Language Model (LLM) such as GPT-3 for the natural-language conversation, the OpenAI GPT-3 API via the LangChain library, and the Streamlit Python web framework for the web interface.
Prompts play a crucial role in communicating and directing the behavior of Large Language Models (LLMs). Semantic Kernel is like a mixologist who looks at available ingredients and crafts new cocktail recipes. Semantic Kernel looks at different plugins, prompts, and memory stores to create an execution plan. Attend the presentation to learn about Semantic Kernel and build an AI Mixologist creative enough to suggest new cocktails. Cheers!
Symbolic techniques produce explainable predictions, and the models are essentially immune to hallucinations. But they require significant manual effort to scale (languages, frameworks, analyses & rulesets). Neural techniques are a promising new venture; however, their ability to discover bugs in code is limited by hallucinations, and in the end they are not significantly better than guessing. Let's combine them to overcome the shortcomings of each.
Over the years, we've seen countless shifts in how applications are built. New languages and frameworks have streamlined development. Visual and no/low-code have empowered many to create apps. And now we have AI tools to develop code "for us." With these changes, where does this leave our developers? Is writing code obsolete?

Let's talk about where we are, where we're going, and how to get the most out of this new tool.
Both Golang (Go) and Python are popular programming languages, each with its own advantages and disadvantages. Python is by far the easiest to set up and use to validate a model, but its rapid prototyping and flexibility are overshadowed by the model's runtime performance compared to Golang. Let's look at how we deployed an object detection model with OpenCV in both languages.
Unraveling the mystery of embeddings from Large Language Models, this talk explores their mathematical foundations and applications. It goes beyond chat and analytics, demonstrating their operational use in C#. Tools like cosine similarity and clustering are used for comparison. This talk is ideal for developers, but also valuable for data scientists and those curious about the math behind machine learning and natural language processing.
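The core comparison tool mentioned above, cosine similarity, fits in a few lines. The talk itself works in C#; here is the same idea sketched in Python, with tiny 3-dimensional toy vectors standing in for real embeddings (which have hundreds or thousands of dimensions, though the math is identical):

```python
# Cosine similarity: the angle-based comparison used to tell whether two
# embedding vectors point in a similar "semantic direction".
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings" (made-up values for illustration).
king  = [0.90, 0.80, 0.10]
queen = [0.85, 0.82, 0.15]
pizza = [0.10, 0.20, 0.95]
# cosine_similarity(king, queen) is close to 1; cosine_similarity(king, pizza) is much lower.
```

Clustering builds on the same primitive: group vectors whose pairwise similarity is high.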
In the past months, AI has disrupted everything. It changed the way we interact with computers, the way we work, and it created a whole new set of opportunities.

But how can you take advantage of AI in your existing applications? With Azure AI services, you can easily integrate AI capabilities into your existing applications, taking them to the next level.

Join me in this session to see how we can do this.
Large Language Models (LLMs) are powerful. However, they don't have the most up-to-date information. In this session, Shao Hang He will show you how to feed external data to augment your LLM prompts using Python, LangChain and OpenAI. This includes setting up a vector database, converting data to embeddings, and making queries. We will also go over several examples, such as chatting with a PDF and smart-searching the messages in your email inbox.
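The retrieval step at the heart of this workflow can be sketched without any dependencies. In the session it is done with LangChain, OpenAI embeddings, and a vector database; below, toy bag-of-words vectors stand in for real embeddings so the flow is runnable as-is:

```python
# Dependency-free sketch of the "R" in RAG: embed the query and the document
# chunks, then return the chunks most similar to the query.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: word counts. Real systems call an embedding model here."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list, k: int = 1) -> list:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = ["invoices are due within 30 days",
        "the office cafeteria closes at 3 pm",
        "vacation requests need manager approval"]
context = retrieve("when is my invoice due", docs)
```

The retrieved chunk is then prepended to the LLM prompt, which is what lets the model answer from data it was never trained on.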
The ascent of machine learning, especially large language models (LLMs), comes with exorbitant infrastructure costs, making it almost unfeasible for individuals to run such programs independently. Amazon, Azure, and Google Cloud dominate with over 60% market share. This concentration challenges the decentralized essence of the internet, potentially impacting innovation, quality, and freedom.
Prompts are like magic spells, using words to achieve impossible effects but requiring complex rules. AI agents are like a wizard who consults a spell book to cast a series of spells. AI Agents use a large language model (LLM) as a reasoning engine to determine how to interact with the outside world based on user input. Attend this session to learn how to build AI Agents in JavaScript using LangChain and other prompting techniques. Alohomora!
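The agent loop described above (the session builds it in JavaScript with LangChain) can be illustrated with a toy in Python. Here a keyword stub stands in for the LLM's reasoning step so the control flow runs offline; the tool names and canned outputs are entirely made up:

```python
# Toy agent loop: "reason" about the input, pick a tool, then "act" by
# calling it. A real agent replaces fake_llm_router with an LLM call.
def fake_llm_router(user_input: str) -> str:
    """Stand-in for the LLM reasoning step: choose a tool by keyword."""
    text = user_input.lower()
    if "weather" in text or "temperature" in text:
        return "get_weather"
    if "remind" in text:
        return "set_reminder"
    return "reply_directly"

TOOLS = {
    "get_weather": lambda q: "sunny, 22 C (stub data)",
    "set_reminder": lambda q: f"reminder saved: {q}",
    "reply_directly": lambda q: "answered from the model's own knowledge",
}

def run_agent(user_input: str) -> str:
    tool = fake_llm_router(user_input)   # 1. reason: choose an action
    return TOOLS[tool](user_input)       # 2. act: run the chosen tool
```

Frameworks like LangChain add the parts this toy omits: prompt templates that describe the tools to the model, parsing of the model's chosen action, and looping until the model decides it has a final answer.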

Explore all 171 talks
