AI has been a contentious addition to the 21st century, with constant debates questioning the motives of a machine that can essentially redesign itself into an increasingly intelligent being beyond human comprehension. Some of the greatest scientists, including Stephen Hawking and Stuart Russell, have raised the possibility of human extinction following a rapid intelligence explosion. In fact, Elon Musk (one of the founding members of OpenAI) believes AI to be one of humanity’s “biggest existential threats”. Then why make something so controversial available to the general public?
Despite being a dangerous creation, if done right, AI could prove to be the next big revolution to move mankind further ahead. So Musk posed a counterintuitive idea - instead of sitting on the sidelines discussing the likelihood of bad outcomes, what if we regulated AI by building an organization of people who care about it and making it safe for human use? If everyone has it, then no single individual or organization can control this superpower in the future.
OpenAI was founded in 2015 as a nonprofit research company by prominent entrepreneurs and researchers, including Elon Musk and Sam Altman (former president of Y Combinator). The initial idea behind OpenAI was to create a research institute focused on developing friendly AI. Over the years, the company has evolved to focus on a wide range of AI-related research and projects. Today, OpenAI is known for its cutting-edge research in areas such as deep learning and reinforcement learning, and for its efforts to promote the responsible development of AI.
OpenAI started with nine researchers, but it was difficult to put together a group of the “best researchers in the field” when an individual’s cost can exceed even that of a top NFL quarterback prospect. Still, some researchers who were passionate about a futuristic project, and keen on working with some of the brightest minds in the world, left behind “crazy salaries” to join forces with OpenAI. With a top-notch team like this, top-notch results were all but inevitable.
In 2019, OpenAI transitioned from a nonprofit to a “capped-profit” company and partnered with Microsoft Corporation, which invested $1 billion in the company.
In June 2020, OpenAI announced GPT-3, or Generative Pre-trained Transformer 3, the successor to GPT-2, one of the first large autoregressive language models. GPT-3 is a deep-learning language model trained on hundreds of billions of words from the internet to produce human-like text and hold more natural conversations.
In January 2021, OpenAI introduced DALL-E, which produces high-resolution, accurate images from text inputs. We’ll discuss DALL-E more in a bit.
For now, let’s address the elephant in the room.
On November 30, 2022, OpenAI announced ChatGPT, which is a… actually, you know what, let’s just ask ChatGPT about it.
Fair enough, ChatGPT, fair enough.
Within five days of launch, ChatGPT had over a million users. For context, Facebook, Spotify, and Instagram took 10, 5, and 3 months respectively to hit that milestone. And rightfully so. As social media started buzzing with new discoveries, people were fascinated by what this AI-powered research assistant could help achieve in the long run.
Something like ChatGPT is a research powerhouse. You can debug programs, write code, and, of course, get factual answers to questions based on the thousands of Wikipedia pages it has been trained on. People are doing everything from asking medical questions to making it write legal rent agreements. And despite all this, it does not use harmful or offensive language unless (as some users have been pointing out) provoked or prompted heavily.
Named as a portmanteau of the artist Salvador Dalí and Pixar’s dearly beloved robot WALL-E, DALL-E is a deep-learning model designed to generate picture-like creations from mere text-based descriptions. As a self-learning model, DALL-E is constantly improving and able to produce increasingly accurate visualizations of the text used to describe the desired output.
Like ChatGPT, DALL-E broke the internet, producing some stunning visuals that left people around the world fascinated by what AI will be capable of in the times to come.
OpenAI has its own community on Discord with over 55,000 members discussing everything from their creations to code and even events. There are also weekend and special-occasion events like ‘Dallify’, a competition in which people create visually stunning artworks from prompts.
OpenAI also has a community forum on Discourse with 15,000+ members. It’s a place where you can have discussions about AI, share your projects, feedback, tutorials, and so much more. There’s also a category for discussing advanced prompts to get assistance and better leverage OpenAI’s technology.
ChatGPT gets it.
AI will most definitely play a formidable role in shaping the future of our world as we know it, and it’s no news that building brand communities can be a cornerstone of sustainable growth and GTM strategies. In fact, what OpenAI stands for - providing a safe but scalable way to make AI more abundant - directly coincides with the idea of community. Imagine what a community of people could achieve with AI as their driving force.
But the possibilities are limitless. Think of what AI can do for community and community tools. AI models or AI-powered community tools will likely play a large role in the management and moderation of brand communities. AI might be used to help identify and respond to customer queries and concerns, or to monitor and moderate online discussions to ensure that they remain productive and on-topic. Additionally, AI might be used to help analyze and understand customer sentiment and preferences, allowing brands to better tailor their community strategies, products, and overall approach for community members and users.
If used extensively, ChatGPT can produce everything from emails and content pieces to deliberate research materials that can be extremely helpful for community builders and managers. Now, imagine this at scale. What if you could do this without prompts or manual intervention? AI could potentially handle many of these tasks better than a human can.
Moderating servers, for example, or making sure community guidelines are adhered to. From sending emails and welcome messages to interacting with members and learning their patterns - an AI can intricately intertwine with communities and help those who are building them.
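To make the moderation idea above concrete, here is a minimal sketch of what an AI-assisted moderation pipeline might look like. Everything in it is hypothetical: the names `Message`, `moderate`, and `keyword_classifier` are invented for illustration, and the trivial keyword filter merely stands in for a real AI model (in practice, the classifier would be a call to a hosted moderation service).

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Message:
    """A single community message (hypothetical shape)."""
    author: str
    text: str

def keyword_classifier(text: str) -> bool:
    """Stand-in for an AI model: flags messages containing banned terms.

    A production pipeline would replace this with a call to a real
    moderation model or API.
    """
    banned = {"spam", "scam"}
    return any(word in text.lower() for word in banned)

def moderate(messages: List[Message],
             is_flagged: Callable[[str], bool] = keyword_classifier) -> List[Message]:
    """Return only the messages the classifier does not flag."""
    return [m for m in messages if not is_flagged(m.text)]

feed = [
    Message("ana", "Welcome to the community!"),
    Message("bot", "Click here for a crypto scam"),
]
clean = moderate(feed)
print([m.author for m in clean])  # prints ['ana']
```

The point of the pluggable `is_flagged` parameter is that the community tooling stays the same while the intelligence behind it improves: swap the keyword filter for a smarter model and the rest of the pipeline is untouched.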
By the way, ChatGPT thinks companies can benefit from community-led growth as a driving factor that not only meets long-term goals but also maintains a sustainable flywheel - one that improves retention and customer relations while driving more revenue through acquisition.
Although OpenAI’s creations are becoming increasingly diverse and progressive, there’s a lot more ground to cover and many more iterations needed to perfect what this remarkable group of people is trying to achieve. OpenAI states that AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. In addition, Sam Altman believes that surpassing human intelligence is a decades-long project that must unfold under regulated circumstances.
Projects like ChatGPT and DALL-E are marvels of human creation, epitomizing what AI is (and will be) able to achieve when it’s done right. But they are still flawed. Apart from some openly stated limitations - like occasionally providing unreliable information or even harmful instructions - ChatGPT has many more shortcomings that make it appropriate for research but not so much for writing. Furthermore, the cost and computational power required to run such models are not always feasible.
AI is self-learning, and through constant iteration, OpenAI will be able to bring forward projects and ideas that are, hopefully, even more intelligent than the likes of ChatGPT or DALL-E. There’s so much more to give, so much more to explore. It’s never a dull day for AI.