In this session, we will explore what this model can do, and rather than just showing a perfectly polished final demo, I will walk you through my entire journey of trying to use the model to solve Wordle puzzles, starting with "Hello World". Along the way, you will gain a good understanding of the model's capabilities and learn some of the prompt engineering techniques that drove progress on this journey (along with what didn't work!). We'll close with a live demo attempting to solve today's Wordle! This session tackles a fun problem, but the underlying prompt engineering techniques for image understanding that you will learn are applicable to a wide variety of business problems.
Ref: https://www.youtube.com/watch?v=sfpk4wQeS_g&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=37
AI continues to be the predominant technology subject of the day; it's the must-have feature of any new product or service; it's at the forefront of many discussions about ethics, attribution and indeed our own future employment prospects. But increasingly, the term "AI" has become synonymous with only one flavour of artificial intelligence, that being "machine learning" (ML): generative AI, applied AI, large language models (LLMs) and the like. However, there are many other types of AI which, until recently, were the mainstay algorithms behind automated decision-making solutions. Many of these concepts can be found in the video games we know and love. Are these other types of AI still relevant? Do they risk being drowned out, or forgotten, in the rush to embrace machine learning solutions? In this session, intended for the enterprise/business application developer, we'll open a window into the world of video game development. We'll explore the algorithms that are a staple of game development: pathfinding, state machines, decision trees, and goal-oriented action planning. We'll delve into some of the performance considerations necessary to keep these algorithms running efficiently. We'll circle back to how the business application developer can use this type of AI in applications, and how the lessons learnt making video games can help us write better software.
Ref: https://www.youtube.com/watch?v=w30cK2ga42M&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=36
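For a flavour of the kind of algorithm the session covers, here is a minimal finite state machine sketch in C#; the agent, states, and distance thresholds are hypothetical illustrations, not code from the talk:

```csharp
// A minimal finite state machine for a game agent. Names, states, and
// distance thresholds are hypothetical, not taken from the talk.
public enum GuardState { Patrol, Chase, Attack }

public class GuardAgent
{
    public GuardState State { get; private set; } = GuardState.Patrol;

    // Evaluate transition rules once per game tick.
    public void Update(double distanceToPlayer)
    {
        State = State switch
        {
            GuardState.Patrol when distanceToPlayer < 10 => GuardState.Chase,
            GuardState.Chase when distanceToPlayer < 2 => GuardState.Attack,
            GuardState.Chase when distanceToPlayer > 15 => GuardState.Patrol,
            GuardState.Attack when distanceToPlayer > 2 => GuardState.Chase,
            _ => State // stay in the current state
        };
    }
}
```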
AI CoPilots are all the rage - but none quite offer that personalised butler service SciFi told us we might one day have. To understand what it takes to train a CoPilot, we will see how training a model works under the hood, discuss the importance of quality training data in crafting a truly powerful and personalised experience, and consider the safety and security concerns when training a model on a public service. Moving beyond the (chat) box, we will leverage Azure's OpenAI Service and Semantic Kernel in .NET to create a custom AI CoPilot for internal applications or data. We will see how to train our own custom Codex model for generating code and commands to perform bespoke tasks against a non-public API, plus some innovative ways to glue this together with a nice user experience. You will leave feeling excited about the power of custom CoPilots, and armed with the knowledge to build your own!
Ref: https://www.youtube.com/watch?v=QXPlIYd408A&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=35
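The abstract pairs Azure OpenAI Service with Semantic Kernel in .NET; as a rough illustration of that pairing, here is a minimal Semantic Kernel sketch. The deployment name, endpoint, and environment variable are placeholder assumptions, not values from the talk:

```csharp
// Minimal Semantic Kernel sketch: wire Azure OpenAI into a kernel and
// invoke a prompt. Deployment name, endpoint, and key are placeholders.
using Microsoft.SemanticKernel;

var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "my-gpt-deployment",
    endpoint: "https://my-resource.openai.azure.com/",
    apiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_KEY")!);
Kernel kernel = builder.Build();

// A real copilot would also register plugins (functions) targeting its own
// internal APIs; this just shows the basic prompt round-trip.
var result = await kernel.InvokePromptAsync(
    "Summarise today's open support tickets in three bullet points.");
Console.WriteLine(result);
```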
As we enter the age of AI, the roles of programmers and designers are evolving. The convergence of design and code signals a narrowing gap, prompting us to question the future landscape of design. Will AI-driven innovation lead us to chart a new course, or will it see us walking down familiar paths? Drawing from my experience leading Design at GitHub, I'll delve into my journey from starting out as a designer who codes to building and leading teams of hybrid designer-developers. I'll examine how blurring the traditional boundaries between design and engineering has shaped the role of Design Engineering in the future of software design. Join me as I explore the dynamic interplay between AI, design, and programming, and consider the exciting possibilities that lie ahead.
Ref: https://www.youtube.com/watch?v=c1uEHfmJMTM&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=34
It's 1998. It's the year of Britney Spears, The Spice Girls, the first Google Doodle, and the year Titanic dominated the box office. It's also the year Hasbro gifted us with the Furby, the first successful attempt at an interactive robot pet. It divided the playground, created a generation of spooky sleepover stories and sparked the Xmas riots of '99. Two decades on, creatives and engineers have started to crack the secrets of the Furby, evolving and unlocking its full potential using today's technology.
Ref: https://www.youtube.com/watch?v=2cW9KeFkMnE&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=33
It's time to meet your AI pair programmer. Do you find yourself stuck on a chunk of code? Unsure of how best to center a div? GitHub Copilot can help. Get unstuck by seeing suggested lines of code or whole functions, learn more about your code by having it explained, and even translate your code into other languages.
Ref: https://www.youtube.com/watch?v=Mlviuph9QX4&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=32
Natural language processing using generative pre-trained transformer (GPT) algorithms is a rapidly evolving field that offers many opportunities and challenges for application developers. But what is a generative pre-trained transformer, and how does it work? How can you leverage the latest advances in GPT algorithms to create engaging and useful applications? Can my business benefit from creating a GPT-powered chat bot? In this demo-intensive session Alan will take a deep dive into the architecture of GPT algorithms and the inner workings of ChatGPT. The journey will begin by looking at the fundamental concepts of natural language processing, such as word embedding, vectorization and tokenization. He will then demonstrate how you can apply these techniques to train a GPT-2 model that can generate song lyrics, showing the internals of how word sequences are predicted. Alan will then shift the focus to larger language models, such as ChatGPT and GPT-4, demonstrating their power, capabilities, and limitations. The use of hyperparameters such as temperature and frequency penalty will be explained and their effect on the generated output demonstrated. He will then cover the concepts of prompt engineering and demonstrate how Retrieval Augmented Generation (RAG) patterns can be leveraged to create a ChatGPT experience based on your own textual data.
Ref: https://www.youtube.com/watch?v=P2cTtiirPnU&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=31
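As a taste of the hyperparameters the session demonstrates, here is a hedged sketch using the official OpenAI .NET SDK; the model name and parameter values are illustrative assumptions, not settings from the talk:

```csharp
// Sketch: setting temperature and frequency penalty on a chat completion
// with the official OpenAI .NET SDK. Model name and values are illustrative.
using OpenAI.Chat;

var client = new ChatClient("gpt-4o",
    Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);

var options = new ChatCompletionOptions
{
    Temperature = 1.2f,      // higher => more varied, "creative" output
    FrequencyPenalty = 0.8f  // discourages repeating tokens already emitted
};

ChatCompletion completion = await client.CompleteChatAsync(
    [new UserChatMessage("Write two lines of a song about the sea.")],
    options);
Console.WriteLine(completion.Content[0].Text);
```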
AI is set to revolutionize the life of a developer, with Microsoft leading the way, combining the public code base of GitHub.com with ChatGPT to produce Copilot, speeding up code generation and increasing developer productivity. However, this is just the latest in a long line of tools and frameworks that have all had the goal of improving productivity. But have we lost something along the way? As soon as we start using tools, they directly influence the way we work, and we need to be aware of when they are useful and when we should use them. This is Conway's Law applied to tools: the very tools you use change how you develop, and not necessarily for the better. This includes ORMs for DB access like NHibernate and EF, mocking frameworks, IoC frameworks, and refactoring tools like ReSharper, all the way to Copilot. Should we always lean so heavily on these tools? Will they be supported in the future? Are we deskilling future generations of programmers? Am I just an old grumpy developer? Will Jon Skeet no longer be required? Some of these questions may be answered in this talk.
Ref: https://www.youtube.com/watch?v=rLN9kSvMRXI&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=30
Web developers: you have a fantastic opportunity to make your web UIs more intelligent and productive than before. But don't just throw on a chat pane and call it done, as people may not even use or like it. Let's explore how language models can integrate into your existing web UIs, anticipating your users' needs and completing their tasks faster. This is a technical, demo-centric talk with examples in Blazor, MVC/Razor Pages, and plain old C#, but the concepts would apply in other stacks too. We're not going to get into the depths of how large language models (LLMs) work internally, nor will we focus on specifics of OpenAI APIs. Instead, we'll focus on practical, complete, end-to-end usage patterns for AI in web UIs, making your users happier and more productive.
Ref: https://www.youtube.com/watch?v=TSNAvFJoP4M&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=29
The journey into AI integration shows that everyone's job, from developers to non-developers, has been impacted by this technology. Adoption starts with the basics: most users overlook critical steps like setting up Custom Instructions in ChatGPT to get clear, concise, and direct responses while avoiding unnecessary niceties and filler. Furthermore, the rise of custom GPTs, built even by non-coders, demonstrates the power of low-code AI, enabling enterprise tasks such as developer booking to be automated, cutting the process from 8 minutes down to just 30 seconds.
Ref: https://www.youtube.com/watch?v=upUZAZljueI&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=27
My talk, "I Connected My Farm To The Internet. Now What?", uses the Llama cam hobby project to explore product development under real-world constraints like a 100 gigabytes of internet data per month limit and zero budget. We integrated Home Assistant and custom AI for llama detection to provide value and continuously iterate. Key takeaways: Listen to your community feedback and remember to freaking build it.
Ref: https://www.youtube.com/watch?v=okqngjyTW88&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=26
Microsoft Security Copilot leverages generative AI to help overwhelmed security teams by summarizing complex incidents and generating crucial KQL queries from natural language prompts. This first-of-its-kind security AI operates at machine speed, drawing on Microsoft's comprehensive global threat intelligence to make defenders more efficient and effective against the rapidly scaling challenges of modern cyberattacks.
Ref: https://www.youtube.com/watch?v=cCUMJ9ywfuQ&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=25
Struggling to make an impact or overcome networking anxiety? LinkedIn is a powerful, free tool that can shorten your path to becoming a "Minimum Visible Person" (MVP). By establishing credibility from a distance and influencing by volume, you can use LinkedIn to access all areas, from the factory floor all the way up to the boardroom, positioning yourself for the career that you want.
Ref: https://www.youtube.com/watch?v=5Tg9leLR3fE&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=24
Deploying Generative AI applications at production scale demands careful attention to architecture and security, starting with the realization that large language models are entirely stateless and state must be constructed and passed through (e.g., via a database) to avoid losing conversation context and enable proper scaling. To achieve production readiness and control costs, developers should implement basic patterns like rate limiting for tokens and messages, restrict maximum payload size to prevent exhaustion attacks, and proactively utilize message analytics to monitor abuse and understand user behavior.
Ref: https://www.youtube.com/watch?v=hn2Dn3fLIfg&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=23
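As a hedged illustration of the rate-limiting pattern described above, here is a sketch using .NET's built-in System.Threading.RateLimiting; the limits and token estimates are illustrative assumptions, not figures from the talk:

```csharp
// Sketch of token-based rate limiting with .NET's built-in
// System.Threading.RateLimiting. The limits are illustrative assumptions;
// a real service would also cap message counts and payload sizes.
using System.Threading.RateLimiting;

var limiter = new TokenBucketRateLimiter(new TokenBucketRateLimiterOptions
{
    TokenLimit = 10_000,                          // max burst of LLM tokens
    TokensPerPeriod = 1_000,                      // replenished each period
    ReplenishmentPeriod = TimeSpan.FromMinutes(1),
    QueueLimit = 0                                // reject instead of queueing
});

async Task<bool> TryConsumeAsync(int estimatedTokens)
{
    using RateLimitLease lease = await limiter.AcquireAsync(estimatedTokens);
    return lease.IsAcquired; // false => respond with HTTP 429
}

Console.WriteLine(await TryConsumeAsync(500)); // True while budget remains
```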
HTML is not just the foundation we build on; it's vital in making our websites accessible, usable and performant. We'll explore how we can make the most of our HTML elements and attributes to improve the performance and accessibility of our websites and applications, as well as boosting the efficiency of our development process. All by using a technology we already use day to day, just using it better than we were before. At the end of the talk you'll be able to make things more useful and usable, not just for performance and accessibility but for our users and for different technologies like bots, applications and AI as well.
Ref: https://www.youtube.com/watch?v=zA4QzRGIP_w&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=22
Everyone has at some point wished they could clone themselves – to do the dishes, or work more efficiently. With advancements and improved accessibility of AI, this becomes more of a reality... This session will explore how to use Azure Custom Neural Voice to create a synthesised version of our own voice for use in a range of fun and practical applications – telephony, voice overs, or even attending meetings on your behalf...! We'll cover the basics of training and tweaking a model of our own voice, with minimal code, that can be used to turn text into natural-sounding speech. Advancing further, we'll learn some ways to train our model with enhanced training data. As an example we'll take a larger amount of audio, such as a podcast, and use widely available tools and APIs to identify the individual speakers and the words being spoken – including highly technical language – to feed into our model. You will leave with the knowledge and inspiration to give "cloning" your voice a try!
Ref: https://www.youtube.com/watch?v=XkbvfSLO3yE&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=21
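For a sense of the "minimal code" involved, here is a hedged sketch of synthesising speech with a custom voice via the Azure Speech SDK; the key, region, voice name, and endpoint ID are placeholder assumptions, not values from the talk:

```csharp
// Sketch: text-to-speech with a Custom Neural Voice via the Azure Speech
// SDK. Key, region, voice name, and endpoint ID are placeholders.
using Microsoft.CognitiveServices.Speech;

var config = SpeechConfig.FromSubscription(
    Environment.GetEnvironmentVariable("SPEECH_KEY")!, "westeurope");
config.SpeechSynthesisVoiceName = "MyClonedVoiceNeural"; // hypothetical voice
config.EndpointId = "<custom-voice-deployment-id>";      // from the portal

using var synthesizer = new SpeechSynthesizer(config);   // default speaker
var result = await synthesizer.SpeakTextAsync(
    "Hello! I'm attending this meeting on my own behalf, honestly.");
Console.WriteLine(result.Reason); // SynthesizingAudioCompleted on success
```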
This session covers the ethical use of AI, detailing how to identify, understand, and proactively counter potential risks while sharing examples of impactful solutions built for the nonprofit and humanitarian sectors. The discussion emphasizes the necessity of implementing responsible AI principles, utilizing mitigation strategies like prompt engineering and grounding, and acknowledging that AI will always make mistakes.
Ref: https://www.youtube.com/watch?v=odWIkRcqEAU&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=20
Large Language Models (LLMs), including GPT, operate at their simplest level by attempting to produce a reasonable continuation of the text they are given, basing their predictions on patterns observed across a massive corpus of information such as billions of web pages. Prompt engineering is an iterative process that employs various techniques, such as role prompting, few-shot learning, and Chain of Thought prompting, to increase the accuracy, reliability, and personalization of the output, which helps minimize uncertainty and build trust in generative AI technology.
Ref: https://www.youtube.com/watch?v=1XrgOK-Ydl8&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=19
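As a hedged sketch of one technique named above, here is what a few-shot prompt might look like as a chat message list (using the OpenAI .NET SDK's message types); the sentiment-labelling task and examples are illustrative assumptions:

```csharp
// Sketch of few-shot prompting: the example pairs teach the model the task
// and output format in-context before the real input arrives.
using OpenAI.Chat;

List<ChatMessage> messages =
[
    new SystemChatMessage("Label customer feedback as Positive or Negative."),
    // Few-shot examples demonstrating the desired behaviour:
    new UserChatMessage("The checkout was quick and painless."),
    new AssistantChatMessage("Positive"),
    new UserChatMessage("The parcel arrived late and damaged."),
    new AssistantChatMessage("Negative"),
    // The real input the model should now answer in the same style:
    new UserChatMessage("Support resolved my issue within the hour!")
];
Console.WriteLine(messages.Count); // 6 messages, ready to send to the model
```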
Recent advances in generative AI, exemplified by models like Stable Diffusion and ChatGPT, have created significant industry hype. Generative AI involves creating new media (such as text or images) by analyzing massive datasets to deduce and mimic existing patterns, a process driven by probabilistic and stochastic modeling. While models like GPT can produce humanlike text, they operate as language prediction models rather than using true reasoning (AGI), which means they often "stumble over facts," produce inconsistent results, and struggle with basic tasks like multiplication, leading to "hallucinations". To leverage these tools effectively, prompt engineering is necessary: this "subtle art" involves providing clear, specific instructions, setting a system context or persona, and potentially using examples to coax a useful result from the AI. When integrating AI via the stateless Completions API, developers must manually maintain conversation state by sending the entire history with each request, often summarizing older messages to manage token costs. More robust applications can use GPT Functions (Tools) to let the model intelligently call external functions, avoiding expensive model retraining, to access live or proprietary data. Alternatively, to query custom data using natural language, facts can be converted into high-dimensional vectors called embeddings and compared using cosine similarity against user queries, often managed in a database like Postgres with pgvector. Finally, the newer Assistants API simplifies the development of domain-specific helpers by automatically managing message history and context compaction, and uniquely, when referencing uploaded knowledge files (like a lease document), it provides specific references or footnotes detailing where the answer was found.
Ref: https://www.youtube.com/watch?v=OxHw_u45h7M&list=PL03Lrmd9CiGey6VY_mGu_N8uI10FrTtXZ&index=18
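The embeddings-plus-cosine-similarity step described above is easy to sketch; here is a minimal version, where toy 3-dimensional vectors stand in for real high-dimensional embeddings:

```csharp
// Sketch: comparing an embedded user query against embedded facts by
// cosine similarity. Real embeddings have hundreds or thousands of
// dimensions; these 3-D vectors are toy stand-ins.
static double CosineSimilarity(double[] a, double[] b)
{
    double dot = 0, magA = 0, magB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot  += a[i] * b[i];
        magA += a[i] * a[i];
        magB += b[i] * b[i];
    }
    return dot / (Math.Sqrt(magA) * Math.Sqrt(magB)); // 1 = same direction
}

double[] query = [0.9, 0.1, 0.3];                  // embedded user question
double[] fact1 = [0.8, 0.2, 0.4];                  // embedded document chunk
double[] fact2 = [0.1, 0.9, 0.2];                  // unrelated chunk

Console.WriteLine(CosineSimilarity(query, fact1)); // ~0.98: strong match
Console.WriteLine(CosineSimilarity(query, fact2)); // ~0.27: weak match
```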