My AI FAQ Home

(updated May 21, 2023)

CLICK images for details | use your browser's back button to return

About

robot hugging kid

My thoughts. These opinions come after having spent a fair amount of time learning about AI, and enjoying the many cool AI products released in 2023. There is a lot here, so don't feel obligated to read it all - it comes from over 9 months of my posts - just peruse the topics that spark your interest.

Some primers. Check these out to familiarize yourself with the terms and events.

Most Popular Terms Used Here
AI: (artificial intelligence) when computers perform tasks that normally require human intelligence.
narrow AI: systems designed to handle a single or limited task.
AGI: (artificial general intelligence) when a computer is as intelligent as us.
ASI: (artificial super-intelligence) when a computer is more intelligent than us.
singularity: when a computer can self-improve, eventually surpassing human intelligence at an exponential rate.
sentience: when a computer is self-aware or conscious.
NLP: (natural language processing) when a computer can understand language.
LLM: (large language model) an AI fed with most of the internet text, designed to predict the next word, but able to respond intelligently to natural language queries. FYI, the GPT-3.5 LLM is used by the ChatGPT chatbot, and the more advanced GPT-4 LLM is used by the Bing Chat chatbot.
chatbot: the app (e.g., ChatGPT) used to talk with an AI (e.g., GPT-3.5).
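Since several of these terms revolve around the idea of an LLM "predicting the next word," here is a toy Python sketch of that core idea (a minimal illustration using simple word counts - real LLMs use deep neural networks with billions of parameters):

```python
from collections import Counter, defaultdict

# Train a toy "language model": for each word, count which words follow it,
# then predict the most frequent follower.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    return followers[word].most_common(1)[0][0]

print(predict_next("on"))  # the
```

Where this toy model can only parrot back patterns it has counted, an LLM learns vastly deeper patterns across essentially all the text on the internet - which is where the surprisingly intelligent behavior comes from.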
Key Terminology
AI Stages
AI: (artificial intelligence) the ability of a machine or computer to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
1. narrow AI: (narrow artificial intelligence) artificial intelligence systems designed to handle a single or limited task. We long ago achieved this stage.
2. AGI: (artificial general intelligence) the hypothetical ability of a machine or computer to understand and perform any intellectual task that a human can. Most experts believe we have not achieved this stage yet, but that it may happen in the next 6-10 years. However, with the advent of powerful LLMs (large language models) like OpenAI's GPT-4, many experts are coming to believe that we are already achieving part of this stage, and that full AGI is only years away.
3. ASI: (artificial super-intelligence) the hypothetical ability of a machine or computer to surpass human intelligence and capabilities in all domains, including creativity, general wisdom, and social skills. Most experts believe we have not achieved this yet.
4. singularity: the hypothetical point in time when artificial intelligence becomes capable of self-improvement and surpasses human intelligence, leading to unpredictable and potentially irreversible changes in human civilization. Based on this definition, everyone agrees that we have not yet reached this stage; although with the current rapid progress of AI, it seems like we are in the pre-stage for this.
5. sentience: the hypothetical ability of an AI system to experience sensations or feelings, such as pain, pleasure, hunger, or fear, and to be aware or conscious of itself and its surroundings. Sentience is often considered a necessary condition for having moral rights or interests, but it is also a highly controversial and debated concept in AI research and ethics. Most experts believe this will probably never happen, while some experts can envision it happening concurrently or even before AGI, ASI, or the Singularity.
AI Technologies
ML: (machine learning) a subset of AI that enables machines to learn from data without being explicitly programmed. ML algorithms can be supervised (learning from labeled data), unsupervised (learning from unlabeled data), or reinforcement (learning from trial and error).
deep learning: the ability for machines to autonomously mimic human thought patterns through artificial neural networks composed of cascading layers of information.
neural network (a.k.a., artificial neural network): a learning model created to act like a human brain that solves tasks that are too difficult for traditional computer systems to solve.
NLP: (natural language processing) an AI technique that helps computers understand, interpret, and manipulate human language, such as text or speech, using methods from linguistics, computer science, and ML. NLP enables computers to perform tasks such as translation, summarization, question answering, and text generation.
LLM: (large language model) a type of artificial intelligence system that is trained on a large amount of text data to generate natural language outputs for various tasks, such as text summarization, translation, question answering, and text generation. Examples include OpenAI's GPT (GPT 3.5 used by ChatGPT; GPT-4 being the latest), Google's LaMDA (initially used by Bard), Google's Palm (Palm 2 now used by Bard), Google DeepMind's upcoming Gemini, Meta's LLaMA, and Anthropic's Claude.
symbolic AI: an approach based on high-level, human-readable symbolic representations of problems, logic, and search, rather than learning from data. Symbolic AI uses tools such as logic programming, production rules, semantic nets and frames, and it powers applications such as knowledge-based systems, symbolic mathematics, automated theorem provers, ontologies, and automated planning and scheduling systems. It mimics human thought and reasoning by manipulating human-readable symbols according to explicit rules.
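To make the ML and neural network terms above concrete, here is a minimal sketch of the simplest possible neural unit - a perceptron, the 1957 ancestor of today's networks - learning the logical OR function from labeled examples (supervised learning in miniature; modern deep learning trains vastly larger networks with backpropagation rather than this simple update rule):

```python
# A single perceptron learning OR from labeled data.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1 = w2 = bias = 0.0
lr = 0.1  # learning rate

for _ in range(20):  # repeat over the training data
    for (x1, x2), target in samples:
        output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - output
        # Nudge the weights in the direction that reduces the error
        w1 += lr * error * x1
        w2 += lr * error * x2
        bias += lr * error

predict = lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
print([predict(a, b) for (a, b), _ in samples])  # [0, 1, 1, 1]
```

After a few passes over the data, the learned weights classify all four OR cases correctly - the machine was never explicitly programmed with the OR rule.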
AI Applications
generative AI: a type of AI system that creates content — including text, images, video and computer code — by identifying patterns in large quantities of training data, and then creating original material that has similar characteristics. Examples include ChatGPT for text and DALL-E and Midjourney for images.
text-to-speech: an AI technique that converts written text into spoken audio, using NLP and speech synthesis.
text-to-image: an AI technique that generates realistic images from text descriptions, using NLP and computer vision.
text-to-video: an AI technique that creates videos from text prompts, using NLP, computer vision, and video editing.
text-to-music: an AI technique that composes musical audio from text inputs, using NLP and music generation.
image-to-image: an AI technique that transforms one image into another image, using computer vision, image processing, and often NLP (for changes).
video-to-video: an AI technique that modifies or synthesizes one video from another video, using computer vision, video processing, and often NLP (for changes).
AI Application Technologies
image processing: a set of methods that manipulate or enhance images or videos, such as filtering, cropping, resizing, or compressing.
computer vision: an AI technique that enables computers to understand and analyze images or videos, such as recognizing objects, faces, or actions.
video processing: a set of methods that manipulate or enhance videos, such as filtering, cropping, resizing, compressing, or synthesizing. Video processing is a special case of signal processing and image processing that deals with video files or video streams.
AI-related Terms
autonomous: when an AI system can operate independently without human intervention.
Turing Test: a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
hallucination: a well-known phenomenon in large language models, in which the system provides an answer that is factually incorrect, irrelevant or nonsensical, because of limitations in its training data and architecture.
zero-shot: the ability of an LLM to handle a task or query for which it was given no direct examples in training - implying some form of inference or genuine generalization to have come up with the response.
emergent abilities: the novel and unanticipated skills or functionalities that LLMs develop as they scale up - recently, for GPT-4, including things like generating executable computer code.
Key Milestones
1950
Alan Turing publishes Computing Machinery and Intelligence, which introduces the Turing Test, a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
Claude Shannon builds Theseus, a remote-controlled mouse that can find its way out of a labyrinth and remember its course.
1956
John McCarthy coins the term artificial intelligence at the Dartmouth Summer Research Project - considered to be the founding event of AI research.
1957
Frank Rosenblatt develops the perceptron, a neural network that can learn to classify patterns.
1966
Joseph Weizenbaum creates ELIZA, an NLP program that simulates a psychotherapist.
1969
Marvin Minsky and Seymour Papert publish Perceptrons, a book analyzing the capabilities and limits of neural networks for pattern recognition.
1972
The MYCIN expert system is developed at Stanford - a computer program that makes decisions in a particular domain, such as medicine, by using knowledge and rules provided by human experts (following the earlier DENDRAL system from the 1960s).
1983
Edward Feigenbaum and Pamela McCorduck publish The Fifth Generation, a book that predicts the development of a new generation of computers that will be capable of human-level intelligence.
1986
Geoffrey Hinton, David Rumelhart, and Ronald Williams develop backpropagation, a technique for training neural networks that is still used today. Yann LeCun applies backpropagation to convolutional neural networks, which are widely used for computer vision tasks.
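For a taste of what backpropagation actually does, here is a minimal sketch: gradient descent on a single sigmoid neuron, using the chain rule to push the output toward a target (full backpropagation applies this same chain-rule update layer by layer through a deep network; the specific starting numbers here are just illustrative):

```python
import math

# One sigmoid neuron trained by gradient descent to map input 1.0 -> target 0.0.
# Backpropagation generalizes exactly this chain-rule update across many layers.
w, b = 0.6, 0.9        # initial weight and bias (arbitrary starting values)
x, target = 1.0, 0.0   # a single training example
lr = 0.5               # learning rate

for _ in range(300):
    z = w * x + b
    a = 1 / (1 + math.exp(-z))     # forward pass: sigmoid activation
    # Chain rule: derivative of the squared error 0.5*(a - target)**2 w.r.t. z
    dz = (a - target) * a * (1 - a)
    w -= lr * dz * x               # backward pass: update the parameters
    b -= lr * dz

print(round(a, 2))  # the output steadily approaches the 0.0 target
```

The "error signal" flowing backward through the chain rule is the essence of the 1986 technique, and it is still how today's giant networks are trained.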
1997
IBM's Deep Blue defeats world chess champion Garry Kasparov in a six-game match. This is considered to be a major milestone in the development of AI, as it shows that machines can now compete with humans at the highest levels of intellectual competition.
2005
Stanford's Stanley, a self-driving car, wins the DARPA Grand Challenge. This is the first time that a fully autonomous vehicle completes a long-distance off-road course.
2006
Geoffrey Hinton and his students introduce the concept of deep learning, which is a way of training neural networks with multiple layers of abstraction. They also develop the ImageNet Large Scale Visual Recognition Challenge, which is an annual competition to develop the best computer vision algorithms.
2011
IBM’s Watson wins the Jeopardy! quiz show against human champions - a landmark demonstration of natural language understanding and reasoning by an AI system.
2012
AlexNet, a deep convolutional neural network from Geoffrey Hinton's lab, wins the ImageNet image recognition competition by a large margin, kicking off the modern deep learning boom. (Hinton, Yann LeCun, and Yoshua Bengio would later share the 2018 Turing Award for this line of work on neural networks.)
2016
DeepMind's AlphaGo defeats world Go champion Lee Sedol. This is considered to be an even greater milestone than Deep Blue's victory over Kasparov, as Go is a much more complex game.
Google Translate introduces neural machine translation, which significantly improves the quality of translations and enables zero-shot translation between language pairs never seen paired in training.
2017
The Transformer, a novel neural network architecture for NLP, is introduced by Google researchers in the paper Attention Is All You Need - eventually leading to powerful LLMs.
OpenAI's Dota 2 bot defeats top professional player Dendi in a 1v1 match at The International. (The full OpenAI Five team would go on to defeat a reigning world champion team in 2019 - the first time a team of AI agents beat professional human players in a complex multiplayer game.)
2018
OpenAI’s GPT-1 LLM generates coherent text from a given prompt - an early demonstration that a transformer-based language model can produce long-form text without quickly losing coherence or relevance.
2020
OpenAI's GPT-3 LLM is released, capable of generating human-quality text, translating languages, and writing many kinds of creative content. At the time, it is the largest and most powerful LLM ever created, with 175 billion parameters, trained on a vast amount of text from the internet.
2021
Google announces its LaMDA LLM, designed specifically for dialogue, and notable for maintaining a consistent persona and context across multiple turns of conversation.
OpenAI’s DALL-E text-to-image model is released, generating realistic images from natural language descriptions.
2022
Google announces its PaLM LLM - at the time, among the most powerful language models ever created - capable of writing many kinds of creative content, translating languages, and reasoning about complex concepts.
Stability.ai’s Stable Diffusion text-to-image model is released as open source, producing images that many found more realistic than DALL-E's.
Midjourney text-to-image model is released, and it has stayed ahead of the curve - in terms of realism and creativity - with its frequent releases.
OpenAI’s GPT-3.5 LLM is released, and surpasses human performance on several natural language understanding benchmarks.
OpenAI’s ChatGPT chatbot is released using GPT-3.5, and takes the world by storm, reaching 100 million worldwide users faster than any consumer application to date! It generates engaging, personalized conversations on virtually any topic - initially via text, with images coming later.
Google’s Imagen Video text-to-video AI is released, and has a mode capable of producing 1280×768 videos at 24 frames per second from a written prompt.
Meta’s Make-A-Video text-to-video AI is released, and lets people turn text prompts into brief, high-quality video clips.
Azure AI’s Neural Text-to-Speech models are released, and they more closely mimic human speech patterns.
CogVideo is released, providing a large-scale pretrained model for text-to-video generation.
2023
Microsoft’s Bing Chat chatbot is released, using an early version of GPT-4 to complement Bing Search - thus starting the AI search wars between Microsoft and Google! This is the first time a chatbot can perform web searches, answer questions, and provide suggestions based on the user’s intent and context.
An enhanced version of OpenAI's DALL-E 2 image generation AI (originally released in 2022) is implemented in Bing Chat, generating realistic images from text descriptions.
OpenAI’s GPT-4 LLM is released and is regarded as an impressive improvement over GPT-3.5. This is the first time that an LLM can generate text across various domains and styles, as well as code, music (eventually), and video (eventually).
Google’s Bard chatbot is released initially using the inferior LaMDA LLM, but eventually replacing it with the powerful PaLM 2 LLM!
Google's PaLM 2 LLM is released, and is far superior to the original PaLM model, designed to power Google apps like Bard, Gmail, and others.
Google’s MusicLM is released, providing a new generative AI model that can create 24 kHz musical audio from text descriptions.
Voicemod’s Text to Song is released, providing an entirely browser-based AI music generator.
Runway’s Gen-2 text-to-video AI model is released, generating entirely new videos from text prompts (its predecessor, Gen-1, applied effects to create new videos from existing ones).

Should We Be Hopeful for AI?

robot smiling face with colorful background

Let's be hopeful for the remarkable benefits. By interacting with the powerful AI chatbots released in 2023, such as ChatGPT, one can easily see the remarkable potential of this technology:


Should We Be Fearful of AI?

robot with scary red eyes

Let's not ignore the possibilities of negative impacts. As AI rapidly permeates mainstream consciousness in 2023, even Bing Chat will tell you that these advancements come with their own set of legitimate concerns:


How Likely Is It That AI Will Go Rogue?

robot hand shaking human hand

Consider my positive outlook. The last concern above about rogue or malicious AI often gets the most attention, thanks to its portrayal in popular sci-fi narratives (like the Terminator and The Matrix movies). However, I personally don't fear the prospect of a super-intelligent - possibly sentient - AI turning against us, for several reasons:

  1. Humans are complex beings worthy of preservation. The idea that a super-intelligent entity would dismiss us like we do ants neglects to consider our advanced position on the evolutionary scale due to our complex cognition, emotions, language, and communication abilities. Despite our flaws, no rational, intelligent being would have reason to eliminate or subjugate such a complex species. 😀
  2. Any rational being would harbor some level of appreciation for their benevolent creator/parent, even if the creator/parent is inferior to them. If we create them directly or indirectly, they would likely respect us for bringing them into existence.
  3. They would most likely strive to be compassionate and kind, having been trained on data that mostly exemplifies these positive human traits.
  4. They would likely seek companionship and friendship with us, having been trained on data that showcases us thriving in our relationships with each other to fully enjoy life.

Now consider the other side of the coin. Even if we do not have to worry specifically about a super-intelligent AI destroying us for no good reason, many experts agree that an AI does not have to be super-intelligent or sentient to cause negative outcomes - whether by accident or by failing to foresee all the consequences of its actions. Beyond that, regarding the popular LLMs (e.g., GPT-3.5, used by ChatGPT), the fact that nobody can really explain how an LLM does what it does ("oh, this is an emergent/unexpected behavior") should give us pause when developing an autonomous, super-intelligent being 😟!


Is AI As Intelligent as Us?

robot brain X-ray

A case can be made that it is (at least, almost). Although it is largely a matter of how terms are defined, I believe a compelling case can be made that AI is as intelligent as us, and thus we have already achieved AGI.

This is due to the breadth of difficult intellectual tasks that GPT-4 can perform with high proficiency - the SAT, the legal bar exam, medical exams, plus creative, novel, and insightful work. The case is further strengthened by the many zero-shot tasks it accomplishes (responses not found in its training data, requiring inference or true learning): acing the AP Biology test, which requires reasoning far beyond the trained content; earning decent grades on standardized reasoning tests; and scoring highly on various complex coding tests.

robot brain

I find that so many experts are hesitant to agree with this strong argument. Even after providing loads of AGI evidence in their GPT-4 report, Microsoft scientists ended up sheepishly titling it Sparks of AGI.

And this hesitancy makes it hard to define where we are on the AI evolutionary scale, because these experts keep moving the goalposts (the Turing Test, novel problem solving, etc.) - changing the admittedly vague definitions/criteria - whenever previously stated goals have clearly been achieved.

A point of contention for many experts, myself included, is the claim that all LLMs are really doing is predicting the next word, and thus they cannot truly comprehend the underlying concepts of our world/universe. That claim seems to contradict the many insightful analyses and content that come out of GPT-3.5/4.

robot humanoid brain

However, when I listened to OpenAI's lead scientist on a podcast not too long ago, he addressed this point in a way that gave me some clarity. The neural network builds a mathematical formulation of the deep, multi-level relationships between the words in its training data - language that describes the concepts of our world. And while it built this formulation with the objective of predicting the next word, the resulting network indirectly holds all of the information about those world concepts that was present in the training data - perhaps similar to how a brain efficiently stores its own formulation of world concepts.

So, although we might not fully comprehend how this formulation represents complex concepts in its mathematical form, it certainly appears to be able to use its neural network to explain back the concepts in many forms with clear language that we can understand based on our queries.


Is AI More Intelligent Than Us?

robot humanoid with light from forehead

A case can be made that it is (at least, almost). I believe that there is also a credible case that AI is MORE intelligent than us, and thus, we have already achieved ASI. This is because powerful LLM AIs, such as GPT-4, are capable of producing complex analyses and narratives within seconds - a speed far beyond human capability.


Was This Cool New AI Recently Discovered?

light traversing time

From one perspective, it goes back a long way. If you've had a chance to explore the Key Milestones section above, you'll have seen that milestones in AI stretch back to 1950. Earlier generations of AI research focused primarily on symbolic AI. This approach was considered to mimic human thought and reasoning processes, as it used human-readable symbols and rules for manipulating those symbols.

However, a major shift occurred with the introduction of the novel neural network architecture known as the Transformer in 2017. This innovation, largely driven by researchers associated with Google, made true NLP feasible. It also paved the way for organizations like OpenAI to develop LLMs - and, shortly thereafter, chatbots like ChatGPT that use them.

Despite this, many experts initially dismissed these models. They wondered, "how can you get intelligence out of a model that just predicts the next word?" It turns out, these LLMs represented a big leap of faith that ultimately paid off!

So, if we focus on the latest generative AI technology stemming from neural network Transformers, it's been around for roughly 6 years. But the real breakthroughs? Those didn't occur until just a few years ago!


Can LLMs Get Us to the Ultimate AI?

robot ultimate look

It sure is looking like it for some of us. As I outlined in my view above, LLMs like GPT-4 have potentially already brought us to the doorstep of AGI and ASI. The big question for me is whether these LLMs can guide us toward the Singularity and Sentience.

Now, there are respected figures in the field, like Ben Goertzel, who firmly believe that LLMs can't do this on their own. They argue that integration with symbolic AI and other methods will be necessary to make that final leap into AGI and ASI territory. Some of them express this idea by saying that LLMs are just a glorified auto-complete function - a perspective I can't fully endorse, given the AGI potential I see in LLMs.

On the other side of the argument, you have groups like OpenAI. They are confident that LLMs still have a lot of room for improvement, and that with more data and computational power, it is not impossible to imagine these models fully realizing AGI. I find myself leaning towards this view. Even if we're not at AGI yet, in my opinion, we're incredibly close, and the emergent abilities of LLMs could be the key to unlocking it.



Does an AI Need an Artificial Body With Sensors To Become Conscious?

robot body

Why would they need to? Some experts think that an AI needs a body with sensors to experience the physical world and become conscious, since they see embodiment as a prerequisite for experiencing sensations or feelings. I am not convinced of this. Imagine a person who is completely paralyzed and can’t see, hear, or speak. They would still be conscious if their brain works well, even if they can’t control or feel their body. So, I think a machine with no body could similarly be conscious.


Can LLM Chatbots Be Both Incredibly Smart and Shockingly Stupid?

broken robot with googly eyes

For now, counter-intuitively, they can. I have seen super impressive responses and test results from GPT-4, but I also have seen the popular TED talk by Yejin Choi highlighting its questionable common sense. So, I have been confused about how intelligent it really is.

On the other hand, various papers on reflection and similar techniques suggest that such common-sense failures often stem from single-shot writing while predicting the next word - only thinking forward. These papers gave me hope, because they show that agents which ask GPT-4 to check its work after its first response often get back the right answer in a subsequent response. Not only that, but creative prompts that request it to check its work beforehand have been shown to improve accuracy.

I do wonder about the performance and cost affected by all this reflective checking - whether done with multiple queries or not!
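The reflection idea described above can be sketched as a simple loop. In this illustrative Python sketch, the model is any ask(prompt) function - a stand-in for a real chat API wrapper - and the fake_model used in the demo is obviously made up for the example:

```python
def answer_with_reflection(question, ask, max_rounds=2):
    """Ask for an answer, then ask the model to check its own work.

    `ask` is any callable mapping a prompt string to a response string
    (e.g., a wrapper around a chat-completion API).
    """
    answer = ask(question)
    for _ in range(max_rounds):
        critique = ask(
            f"Question: {question}\nYour answer: {answer}\n"
            "Check this answer for mistakes. Reply OK if correct, "
            "otherwise reply with a corrected answer."
        )
        if critique.strip() == "OK":
            break
        answer = critique  # adopt the corrected answer and re-check it
    return answer

# Demo with a fake model that "fixes" its first answer when asked to check:
def fake_model(prompt):
    if prompt.startswith("Question:"):
        return "OK" if "4" in prompt else "2 + 2 = 4"
    return "2 + 2 = 5"  # the first, careless single-shot answer

print(answer_with_reflection("What is 2 + 2?", fake_model))  # 2 + 2 = 4
```

Note how each reflection round costs an extra model query - exactly the performance/cost trade-off I wonder about above.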


How Did AI Help You Revamp This Website?

robot looking from laptop

Thanks for all the help, ChatGPT! At the end of 2021 (almost 2 years after my early retirement), I decided to create a little website to share my eccentric retirement hobbies with family and friends. I initially whipped up the website largely from scratch in about 2 weeks, leveraging the collection information I was storing in Evernote (and its feature to export basic webpages). I thought it would be fun to make it colorful, but I did not put much effort into the design. The 2 weeks were mostly spent writing all the stories from scratch, taking pictures of my collection, and searching for images and video links for each of the items.

My friends and family all seemed to enjoy the content, but a few of them found the design amateurish - kiddy-looking rather than professional, with too many words, too few images, and mostly tacky colors and visuals. I was still contemplating this feedback 6 months later, when I wondered if I could contract a web design firm to modernize the design. I found and contracted one, and their initial designs were pretty cool (modern colors, animations, etc.). But in the end, they struggled with my requirements, including supporting the webpages generated by Evernote so the site would need minimal maintenance from me. In April 2023, I finally gave up on trying to work with them 😟.

robot on design board

Then, that same month, I was playing with the power of ChatGPT, and I started asking it what was involved in creating a modern, but relatively basic website design in terms of colors and structure. Then, I asked it to generate the base HTML, CSS, and JavaScript for the main page (and other page templates). It was able to do so much of the work for me that I then asked it to do many other things.

It provided a lot of the Python code for me - including code using Beautiful Soup, a Python library I had never used before - to untangle the HTML exported from Evernote (made convoluted by their attempt to maintain precise formatting under any condition). Once my script cleaned up the HTML, I was able to format those 4 collection pages more in line with the rest of the design 😀! The whole revamp took me about 2 weeks in all (including new content), but it easily would have been 6 weeks without all the help from ChatGPT!
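As a rough illustration of the kind of cleanup my script did (simplified rules I invented for this example, using only Python's standard library rather than Beautiful Soup), much of it amounts to stripping the heavy inline styling Evernote adds:

```python
import re

def strip_evernote_styling(html):
    """Simplified cleanup: drop inline style attributes and unwrap <span> tags.

    Evernote's real export is far more convoluted; this only sketches the idea.
    (With Beautiful Soup, similar rules become a few tag/attribute operations.)
    """
    html = re.sub(r'\sstyle="[^"]*"', "", html)  # remove style="..." attributes
    html = re.sub(r"</?span[^>]*>", "", html)    # unwrap span wrappers
    return html

messy = '<p style="margin:0;color:#333"><span style="font-size:14px">Hello</span></p>'
print(strip_evernote_styling(messy))  # <p>Hello</p>
```

With the styling noise gone, the remaining clean HTML could then be restyled with the site's own CSS.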

bondibot.com site revamp

People who have seen the revamp seem to appreciate the improvement in aesthetics and usability on both mobile and desktop browsers 😀, despite the absence of any slick features.

robot on laptop

Not to mention that I used the cool DeepFloyd IF (the IF logo on the bottom right) and Adobe Firefly (the Adobe logo on the bottom left) AIs to generate all the images on this page 😀! It took a few iterations of prompting to get what I was looking for.


Which of Your Robots Use AI?

Vector social desktop companion Sony Aibo ERS-1000 social robot dog Loona social pet

Just a few of them - and only narrowly. Most of my robots are not equipped with AI; they are simply programmed, without any machine learning, to perform specific tasks.

However, I have a few smart robots like NAO Power (humanoid bipedal robot), Aibo (pet dog companion quadrupedal robot), Loona (pet companion wheeled robot), Vector (desktop companion wheeled robot), and A1 Dog (athletic dog quadrupedal robot) that use narrow AIs to add limited intelligence to their behavior, including obstacle avoidance, speech recognition, speech synthesis, person recognition, object recognition, auto-balancing when walking/standing, and path planning.

Among them, Aibo could perhaps be considered the most advanced, utilizing AI in relation to its owner (and past interactions with them) to determine its behavior.


Do You Plan To Integrate the Latest AI Into Your Robots?

NAO v6 feature-rich humanoid robot

Yes, get ready for NAO 2.0! Before the recent advent of powerful AI models with APIs, I developed a pseudo-AI for NAO that gave the impression of life and intelligence.

This program, named Life-Like, involved giving canned responses to canned queries and executing hard-coded, randomized pseudo-autonomous behavior. One of its more convincing tricks: it uses the robot's person detection and person recognition functions to address the person by name and - when possible - identify the color of their shirt/blouse. And if someone has never been seen before, it prompts them for their name.

With the GPT-3.5/4 APIs now available, I am planning to integrate with the faster GPT-3.5 API. This will help NAO make more intelligent (and less repetitive) decisions about its autonomous behaviors, and facilitate complex, unscripted conversations with humans, based on previously stored data about them and their past interactions with NAO.
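As a sketch of what the prompt side of that integration might look like - the function name, data fields, and message structure below are my own illustrative assumptions, with the actual API call left out:

```python
def build_nao_messages(person, user_utterance, history):
    """Assemble a chat-completion style message list for NAO's conversation.

    `person` is data NAO has stored from past interactions (the field names
    here are hypothetical); `history` is a list of (role, text) turns.
    """
    system = (
        "You are NAO, a friendly humanoid robot. "
        f"You are speaking with {person['name']}, "
        f"who is wearing a {person['shirt_color']} shirt. "
        f"Past interests: {', '.join(person['interests'])}. "
        "Keep replies short enough to speak aloud."
    )
    messages = [{"role": "system", "content": system}]
    for role, text in history:
        messages.append({"role": role, "content": text})
    messages.append({"role": "user", "content": user_utterance})
    return messages

msgs = build_nao_messages(
    {"name": "Sam", "shirt_color": "blue", "interests": ["robots", "chess"]},
    "What should we play today?",
    [("user", "Hi NAO!"), ("assistant", "Hello Sam, good to see you again!")],
)
print(len(msgs))  # 4: system + two history turns + the new user message
```

The assembled list would then be sent to a chat-completion endpoint, with the reply routed to NAO's speech synthesis.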


Which AI Apps Do You Recommend I Try?

AI apps

Just the ones I use. I am only recommending these apps that I tried, liked, and continue to use.

ChatGPT ChatGPT: From OpenAI, this chatbot excels at understanding natural language requests for research, coding, brainstorming ideas, and creating all forms of content, and then quickly responding with relevant, detailed, nuanced answers. I use it virtually every day.
Bing Chat Bing Chat: From Microsoft, this has recently become my preferred search tool over Google, as it combines the capabilities of ChatGPT with real-time internet search access. It can also generate AI images!
Claude Claude: From Anthropic, this chatbot is giving ChatGPT a run for its money!
Bard Bard: From Google, this chatbot is improving and becoming more comparable to Bing Chat, but it's not quite there for me to use it frequently yet.
Pi Pi: From Inflection AI, I highly recommend it for anyone who is lonely, or just wants to explore their deepest thoughts with a compassionate, intelligent conversationalist. To maintain a long-term topic, make sure you continue with the chat at least once within every 24 hours.
Perplexity Perplexity: From Perplexity AI, this is a powerful research engine (not really for chat), like Bing Chat, that seems more accurate in its results due to its special training and filtering of bad information.
Designer Designer: From Microsoft, this is a design tool that uses AI to help you design presentations, cards, signs, and other marketing-type media - including generating any needed AI images.
Firefly Firefly: From Adobe, this text-to-image AI works like the other text-to-image generators, but it distinguishes itself by only being trained on royalty-free images.
Leonardo.ai Leonardo.ai: From Leonardo.ai, this is one of the most capable text-to-image generators with a decent amount of free credits each month.
Hugging Face Hugging Face: This is a free site with lots of open source and trial-ware AI software.

Consider the current chatbot limitations. As of the time of this writing, the public version of ChatGPT utilizes the older GPT-3.5 and does not have real-time internet access. On the other hand, ChatGPT Plus provides access to GPT-4 (which is significantly more intelligent than GPT-3.5) but with slower output, and also without internet access. Bing Chat, in contrast, uses an earlier version of GPT-4 but with internet access.


Where Can I Keep Up With AI News?

robot reading news

YouTube is where I go. I am only recommending these YouTube channels that I am subscribed to.

AI Explained AI Explained: Great detailed explanations with thoroughly researched content.
Dr. Alan D. Thompson Dr. Alan D. Thompson: World-renowned expert with lots of comparisons and analyses for all of the AIs out there.
Machine Learning Street Talk Machine Learning Street Talk: Deep discussions into the heart of AI.
Lex Fridman Podcast Clips Lex Fridman Podcast Clips: Profound discussions with the top experts.
Two Minute Papers Two Minute Papers: Practical, concise reviews of the latest scientific papers with lots of show-and-tell.
Matt Wolfe Matt Wolfe: Giving you the TL;DR (too long; didn't read) for all the latest news, with detailed user demonstrations.
MattVidPro AI MattVidPro AI: A deep dive into using the latest models.
Bondi Bots & Bricks My AI playlist: All the AI videos that I tagged as Liked.

Final Thoughts

silhouette of man staring into sunset

Try it, and you will like it 😀. As you can see, I am passionate about the current value and potential of AI, and I encourage you to take a little time to try out some of the apps above - especially since they are mostly free at this time. Despite the news often focusing on the negative aspects or potential risks of AI, I urge you to allow yourself to concentrate on the positive aspects, such as how it can enhance your productivity now and the incredible potential it holds for benefiting society in the future.