
AI in Full Bloom: A 2023 Year-End Snapshot of Generative AI

As we close out 2023, one topic has undoubtedly dominated the technology industry this year: Generative AI.

It all started with the release of ChatGPT back in November 2022: the world was taken by surprise by this interesting chatbot that could converse in natural language and was also quite skilled at generating free-form text content, even including the ability to write code. And so the age of Large Language Models (LLMs) began.

Fast forward a mere 12 months, and we have seen the industry explode with new AI models, use cases, and tools, and a full-on race to stay ahead on the frontier of Gen AI, backed by massive amounts of resources and capital. We even had one of the craziest, most dramatic weekends in the industry when Sam Altman was ousted and then reinstated as the CEO of OpenAI, all within the span of 96 hours or so. If you’re interested in that chain of events, there’s a great summary here.

Drama and intrigue aside, in this blog post, I want to provide a summary of the current state of the technology as we draw near to the end of 2023, the year that will be remembered as the year when AI went mainstream. Let’s go!

Commercial Foundational Models

These models are the cornerstone of commercial LLMs, and there is a wide variety available now, with different strengths and weaknesses, and provided by different parties.

On the OpenAI front, the flagship GPT-4 model was released in March 2023 and remained their top model until November. At the first OpenAI Dev Day event, Sam Altman introduced the new flagship, GPT-4 Turbo. This new model has the same capabilities as GPT-4 but offers faster, cheaper inference and increases the context window to 128K tokens. OpenAI also debuted their GPT-4 with Vision model, which can consume images as well as text as part of the user interaction. These “multi-modal” models go beyond the scope of just text and are starting to interact through images and, most likely soon, audio and video.
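
To make this concrete, here’s a minimal sketch of calling both models with the OpenAI Python SDK (v1.x). The model identifiers are the late-2023 preview names and may change over time, and the image URL is just a placeholder:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Text chat against GPT-4 Turbo (preview model name as of Dev Day 2023)
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Summarize 2023 in Gen AI in two sentences."}],
)
print(response.choices[0].message.content)

# GPT-4 with Vision accepts images alongside text in the same chat API
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(vision.choices[0].message.content)
```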

Due to the partnership between Microsoft and OpenAI, Microsoft has debuted the Azure OpenAI service, where you can host these models inside your Azure subscription. 

Source: microsoft.com
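
If you are running these models in Azure instead, the same SDK ships an AzureOpenAI client. A rough sketch, where the endpoint, API version, and deployment name are all placeholders for your own Azure resources:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder endpoint
    api_key="<your-azure-openai-key>",
    api_version="2023-12-01-preview",  # use a version your resource supports
)

# In Azure, you address your *deployment* name rather than the raw model name
response = client.chat.completions.create(
    model="my-gpt4-deployment",  # hypothetical GPT-4 deployment
    messages=[{"role": "user", "content": "Hello from Azure OpenAI!"}],
)
print(response.choices[0].message.content)
```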

One of OpenAI’s biggest competitors is Anthropic. Their flagship model is Claude-2, and the latest version is Claude-2.1. One particular differentiator of Claude-2.1 is that it supports a context window of up to 200K tokens, the largest of the currently available commercial models. Thanks to investments and partnerships, Claude is available on both AWS and Google Cloud.
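
As a rough sketch, this is what calling Claude-2.1 looked like with Anthropic’s Python SDK in late 2023, using the prompt/completion-style API of that era:

```python
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

completion = client.completions.create(
    model="claude-2.1",
    max_tokens_to_sample=300,
    # Claude's classic API expects the Human/Assistant turn markers
    prompt=f"{HUMAN_PROMPT} What makes a 200K-token context window useful?{AI_PROMPT}",
)
print(completion.completion)
```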

AWS also debuted their Bedrock service to host Gen AI models. As I just mentioned, Anthropic’s Claude is hosted there, as well as Amazon’s own family of models called Titan.
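
Here’s a minimal sketch of invoking Claude through Bedrock with boto3; the model ID and request schema follow the Bedrock documentation as of late 2023:

```python
import json
import boto3

# The Bedrock runtime client invokes hosted models by ID
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Name one use case for Amazon Titan.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})
response = bedrock.invoke_model(modelId="anthropic.claude-v2:1", body=body)
print(json.loads(response["body"].read())["completion"])
```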

And last but definitely not least is Google. Google debuted their PaLM 2 model back in May, but the biggest announcement was the public debut of their Gemini family of models this December. Gemini is Google’s new “state-of-the-art” family of models, coming in three versions: Nano, Pro, and Ultra. Pro is already available, powering their Bard chat assistant, and Ultra is planned for release early next year. The anticipation for Gemini Ultra is extremely high, as Google has released demo videos and benchmarks suggesting it’s a strong contender against GPT-4, with multi-modal capabilities as well. And, of course, Google also offers the capability of running these Gen AI models inside Google Cloud under the umbrella of their Vertex AI service.

Source: google.com
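
A quick sketch of calling Gemini Pro through the Vertex AI Python SDK, which still lived in a preview namespace as of December 2023; the project ID is a placeholder:

```python
import vertexai
from vertexai.preview.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")  # placeholder project

model = GenerativeModel("gemini-pro")
response = model.generate_content("Explain what makes a model multi-modal.")
print(response.text)
```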

Open Access Foundational Models

On the open access front, some of the most notable models are Meta’s Llama2, Mistral’s 7B model, and the Falcon model family from the Technology Innovation Institute in Abu Dhabi. This, of course, is not an exhaustive list, as the open access space moves as fast as the worldwide community does, and new models, whether from research teams or fine-tuned by end users, get uploaded to repositories like HuggingFace on a daily basis.
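
Running one of these open access models locally takes only a few lines with the HuggingFace transformers library. A minimal sketch; note that Mistral-7B needs a GPU with roughly 16 GB of memory in half precision:

```python
from transformers import pipeline

# Downloads the model weights from HuggingFace on first use
generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.1",
    device_map="auto",   # place layers on the available GPU(s)
    torch_dtype="auto",  # use half precision where supported
)

output = generator("Explain open access models in one paragraph.", max_new_tokens=200)
print(output[0]["generated_text"])
```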

I also intentionally used the term “open access” instead of “open source,” as there is an ongoing debate in the community about whether some of these models can truly be called open source. For example, Meta’s Llama2 is offered as an open download, and you can run it locally or in any cloud, but Meta doesn’t provide the training dataset or the training code and parameters used to generate the model. So is the model really “open source,” or is it just open access? I will let the reader decide.

AI Assistants

The most obvious application of LLM technology has been the implementation of smart assistants for end users. ChatGPT, of course, is the prime example and the number one application people think of when they hear the term AI.

Beyond ChatGPT, Microsoft incorporated AI into their own Bing Chat product, integrated with their Edge browser. To go beyond their own browser, Microsoft has released copilot.microsoft.com, a browser-based AI chat experience that runs on any Chromium-based browser on Windows or Mac. And, of course, Microsoft is bringing the technology to Azure, Microsoft 365, and GitHub with dedicated copilots for each.

At their re:Invent conference, AWS announced their assistant, called “Q.” They will be placing Q in the AWS console as well as offering it as a plugin for popular tools and IDEs.

Google has a consumer ChatGPT competitor called Bard as well. Bard was initially powered by PaLM 2 but is now powered by the Gemini Pro model. Inside Google Cloud and Google Workspace, there is also a family of smart assistants under the umbrella of Duet AI. For example, when you are working in BigQuery Studio, Duet AI will suggest SQL to you, or it can explain SQL queries in natural language.

LLM Orchestration

As Generative AI goes beyond the initial consumer chat scenarios and early enterprise prototypes, it becomes pretty clear that a clever prompt alone is not enough to chat with a model. You will need to bring extra context to the model from your knowledge base documents or from real-time operational systems. And once the LLM produces a response, you might need to format it in a specific way to fit your downstream consumer (a GUI, an API, or a subsequent LLM’s context). This process of coordinating the different steps of your LLM interaction is LLM orchestration.

In this space, we have seen the rise of popular open-source libraries like LangChain and LlamaIndex that integrate with all sorts of data sources. Some of the hyperscalers have also created their own first-party tooling: Azure has a “low-code” experience called PromptFlow, and both Google Cloud and AWS offer “managed RAG (retrieval augmented generation),” where they streamline the process of bringing your own data into the LLM. Microsoft is also working on their own open-source LLM plugin development and automation library called Semantic Kernel.
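
To illustrate the idea, here’s a minimal RAG sketch with LangChain as its API stood in late 2023: load your documents, index them in a vector store, and let a retriever feed relevant chunks into the LLM call. The file name and question are placeholders:

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# 1. Load and chunk the knowledge base document
docs = TextLoader("knowledge_base.txt").load()  # placeholder file
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# 2. Embed the chunks and index them in a local vector store
vector_store = FAISS.from_documents(chunks, OpenAIEmbeddings())

# 3. The chain retrieves relevant chunks and passes them to the LLM as context
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-4-1106-preview"),
    retriever=vector_store.as_retriever(),
)
print(qa.run("What does our knowledge base say about refunds?"))
```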

Source: LangChain.com - Vector Stores flow

Image Generation

Generative AI has also dramatically improved in the space of image generation from the beginning of the year to now. We have seen the models go from creating comedic monstrosities to being able to generate artistic renditions or photorealistic images that go beyond the uncanny valley and are very hard to distinguish from a real photograph.

In the commercial space, Midjourney is a very popular service that is focused on image generation and has evolved extremely fast, with very high-quality output. OpenAI also has their DALL·E family of models. Their latest model, DALL·E 3, is a massive leap from the previous DALL·E 2, with big improvements in image quality, anatomy, composition, and control. Google also has their Imagen text-to-image models, and most likely, Gemini Ultra will also offer image generation capabilities.

In the open access space, Stable Diffusion has gone from version 1.5 to 2.1 to the latest SDXL and SDXL Turbo. As is usually the case with the open access space, there are also hundreds of community tools that have arisen to generate images (Automatic1111, SD Next, ComfyUI), train styles and people's likeness into them (Dreambooth, EveryDream), and control the image generation with a large amount of precision (ControlNet).
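
As a taste of the open tooling, here’s a minimal SDXL sketch with HuggingFace’s diffusers library, reproducing the prompt from the image below; it assumes a CUDA GPU with roughly 10 GB or more of memory:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Downloads the SDXL base weights from HuggingFace on first use
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
).to("cuda")

image = pipe("Photorealistic image of a female astronaut on the space shuttle").images[0]
image.save("astronaut.png")
```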

Source: “Photorealistic image of a female astronaut on the space shuttle” - generated with SDXL

Audio and Video Synthesis

As capabilities evolve, Gen AI is moving beyond text and images and into audio and video. There are some interesting services available now, like RunwayML’s text- or image-to-video generation. You can try this out on your phone today: give the app an image, and it will generate a short 4-5 second video based on it, guided by an optional text prompt describing what you want in the video.

From the audio perspective, we also have text-to-speech capabilities that are moving away from the classic robotic voice of previous solutions and towards natural-sounding voices that narrate with the modulation and intonation a human narrator would use. For example, ElevenLabs has a very popular text-to-speech service with thousands of available voices, as well as the capability of training your own voice into the AI. They even offer “speech to speech” capabilities to transform one voice narration into another voice while also being able to fine-tune the target voice.
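
As a rough sketch of what such an integration looks like, here’s a call to ElevenLabs’ REST text-to-speech endpoint with the requests library; the voice ID and API key are placeholders you’d replace with values from your own account:

```python
import requests

voice_id = "<your-voice-id>"  # placeholder; list your voices via the /v1/voices endpoint
response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
    headers={"xi-api-key": "<your-api-key>"},
    json={"text": "Hello! This narration was generated by a text-to-speech model."},
)

# The API responds with raw audio bytes
with open("narration.mp3", "wb") as f:
    f.write(response.content)
```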

As expected, AWS, Azure, and Google also offer transcription services as well as text-to-speech capabilities. Interestingly, one of the top transcription models is actually an open access model from OpenAI called Whisper, which is now on its third version (large-v3) and can be run anywhere.
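
Because Whisper is open access, you can run it locally with a few lines of Python. A minimal sketch using the openai-whisper package and a placeholder audio file:

```python
import whisper  # pip install openai-whisper

# "large-v3" is the latest checkpoint; smaller models ("base", "small")
# trade accuracy for speed and memory
model = whisper.load_model("large-v3")
result = model.transcribe("meeting_recording.mp3")  # placeholder audio file
print(result["text"])
```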

Looking forward to 2024

I’m just scratching the surface here in terms of all the models, tools, and services that came out in 2023. I can only imagine that 2024 will be even more action-packed. Things I’m looking forward to:

  • Google releasing Gemini Ultra, allowing us to go hands-on with the model and test its new multi-modal capabilities.
  • OpenAI/Microsoft: will we see a GPT-4.5 or a GPT-5? Or audio and video capabilities built into GPT-4, the way vision was?
  • Image, audio, and video generation continuing to improve on a daily basis; I expect better tools, higher fidelity, and more fine-grained control over generations. I also believe we will see regulatory frameworks coming out to prevent the abuse of these capabilities in terms of generating deep fakes, intentional misinformation, etc.

And ultimately, I hope everyone gets a chance to learn more about the technology and leverage it to improve their day-to-day work and life. Happy New Year, everyone!
