Get to Know AI
What is Artificial Intelligence (AI)?
Imagine your brain. Now, toss in some silicon, circuits, and a sprinkle of computer code fairy dust. Voila! You've got the techy sibling of your brain, known as Artificial Intelligence.
Ok, it’s not quite that simple. Artificial intelligence—commonly called AI—refers to the simulation of human intelligence by machines programmed to think and act like humans.
AI isn’t some far-off sci-fi dream… it’s actually here, among us, right now. In fact, you’ve probably been interacting with AI for years without really knowing it.
From internet search engines to social media loading your feed with irresistible cat videos (guess how they knew you’d click on that), AI has been working behind the scenes for years, absorbing vast amounts of data, learning, and helping humans to solve ever more complex problems.
While AI isn’t new, it’s a quickly advancing technology that has experienced some recent breakthroughs, leading to dazzling new applications like ChatGPT. To understand these breakthroughs, it helps to understand the differences between the two main branches of AI.
These are discriminative models, which analyze and classify existing data, and generative models, like GPT-4 or Gemini, which can create new data.
Discriminative AI
For years, many industries and fields of study have been using discriminative AI tools to lighten their workload and boost efficiency. Discriminative AI tools are often built for specific tasks, and excel at classification and labeling within set boundaries. Discriminative AI is named for its proficiency in discriminating between data types and making decisions based on learned characteristics. While it can help draw conclusions and create predictions based on existing data, it does not “generate” new data.
One of the most common discriminative AI tools, and one you probably interact with daily, is the email spam filter. Developers provide spam filter models with loads of data and mark the common patterns, markers, and characteristics that most spam shares. The model then classifies, labels, and blocks spam before it reaches your inbox.
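To make this concrete, here is a minimal sketch of a discriminative classifier in Python. It is not how real spam filters work (production systems typically use statistical techniques like naive Bayes or neural networks trained on millions of messages), but it shows the core idea: learn word patterns from labeled examples, then use them to label new data.

```python
from collections import Counter

# Toy training data: messages a human has already labeled as spam or ham.
training = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting moved to friday", "ham"),
    ("lunch tomorrow?", "ham"),
]

# "Training": count how often each word appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
for text, label in training:
    word_counts[label].update(text.lower().split())

def classify(message):
    """Label a message by which class its words appeared in more often."""
    scores = {
        label: sum(counts[word] for word in message.lower().split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free money prize"))      # spam-like words dominate: "spam"
print(classify("friday lunch meeting"))  # work-like words dominate: "ham"
```

Notice that the model never generates anything; it only discriminates between categories it was trained on, which is exactly the boundary between discriminative and generative AI described above.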
Generative AI
While discriminative AI is built to make determinations and predictions within set boundaries, generative AI is designed to generate, or create, novel outputs. Generative AI identifies complex patterns in huge datasets, then uses predictive modeling to generate content that is based on those patterns.
Before generative AI tools can provide you with useful and relevant content, they first need to understand what you’re asking for. That’s why tools like ChatGPT, Gemini, and Midjourney have taken the world by storm: you can “talk” with them using everyday language rather than computer code.
The ability to prompt these tools with simple language is a massive leap in general accessibility because you no longer need to be a computer scientist to interact with them. That leap is due in large part to a powerful machine learning technology: Large Language Models.
What are Large Language Models (LLMs)?
Our human brains, composed of billions of specialized cells called neurons, are wired for language. Large Language Models are specialized AI algorithms that are programmed to mimic the neural networks found in our own heads. Although inspired by the architecture of the biological brain, LLMs learn and understand language in a very different way than humans do.
LLMs are trained on huge datasets to identify the statistical relationships between words, phrases, and other data points. The models then make predictions or generate text based on the patterns they have recognized.
On a mathematical level, LLMs understand the patterns that make up language, and can respond with language based on your input. This design allows you to converse with LLMs naturally.
For example, you can input simple commands or questions, like: “Summarize this article about the ecstasy of superb technical writing.” The tool analyzes the input in relation to the patterns of its vast training data, then predicts and generates natural language responses that seem statistically likely to provide you with what you’re looking for.
Instead of predetermined responses, LLMs can generate “new” responses that haven’t been written word-for-word by developers. That may seem a bit magical, but it all comes down to how the models are developed.
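The idea of predicting language from learned statistics can be illustrated with a drastically simplified sketch. Real LLMs use neural networks with billions of parameters, not the toy bigram counts below, but the underlying principle is the same: predict what comes next based on patterns observed in training data.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for the web-scale text real LLMs use.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (bigram statistics).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` during training."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" followed "the" most often in the corpus
```

A model like this can only echo its training data; the leap to modern LLMs comes from learning far deeper patterns across vastly more text, which is what lets them produce responses that feel new.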
How are LLMs developed?
The exact details of LLM development are closely guarded trade secrets. While each company is doing it a bit differently, the process generally includes these steps:
Collecting Data: Gathering information for training from enormous datasets.
Preprocessing: Refining and organizing the data.
Choosing & Training a Model: Picking an algorithm and teaching it using the data.
Evaluation: Testing the model's performance on new data.
Deployment: Using the trained model in real-world applications.
Feedback Loop: Improving the model with more data and tweaking weights and values.
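The steps above can be sketched with a toy example. The "model" here is just a lookup table that learns whether a number is even from its last digit; it is a stand-in to show the collect, train, and evaluate stages, not anything resembling real LLM training, which involves vastly more data and compute.

```python
# Step 1 & 2: Collect and preprocess data (toy task: label numbers even/odd).
data = [(n, n % 2 == 0) for n in range(100)]

# Step 3: Choose & train a model. This "model" memorizes, for each
# last digit it sees during training, whether the number was even.
train, test = data[:80], data[80:]
parity_of_last_digit = {}
for n, label in train:
    parity_of_last_digit[n % 10] = label

def model(n):
    return parity_of_last_digit.get(n % 10, False)

# Step 4: Evaluation — test on held-out data the model never saw.
accuracy = sum(model(n) == label for n, label in test) / len(test)
print(f"held-out accuracy: {accuracy:.0%}")  # 100% on this easy task

# Steps 5 & 6 (deployment and feedback) would mean shipping the model
# and retraining it as new labeled data arrives.
```

Holding out test data the model never trained on is the key habit here: it is how developers check that a model has learned general patterns rather than just memorizing its training set.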
For a more detailed explanation, check out this article from Nvidia.
Since these models only function because of the information they’re trained on, you might wonder: Where does this information come from?
The Internet: An AI Training Ground
Developers of generative AI tools get the majority of their training data from the vast archives of the internet. That includes billions of data points harvested from books, articles, webpages, and Wikipedia, which is often one of the largest single sources of training data for these models.
It's virtually impossible to remove every bias or misleading bit from their enormous datasets. As a result, these models can inadvertently propagate misinformation or prejudice. So, it's wise to approach their outputs with a critical eye—not only considering objective accuracy, but also the surrounding context, background, and possible omissions.
We know that the content generative AI tools provide is derived from patterns extrapolated from massive datasets, and that the data is produced by human authors.
In short, generative AI tools are using our data. But how should we use them?
How should generative AI be used?
It’s best to think of generative AI as a helpful assistant, not an all-knowing oracle that can complete your work for you. That said, there are many ways that generative AI might be able to assist you with the iFixit Technical Writing Project (iTWP).
There are ways to use generative AI at every stage of the writing process, including:
Brainstorming
Planning and Organizing
Revising
Editing, Proofreading, and Formatting
While generative AI tools can aid in the writing process, there are loads of other applications that you may find helpful in other aspects of your educational journey—from a study buddy to a personalized tutor.
Be sure to review your university’s policy and follow your instructor's guidelines on using generative AI. Just because these tools are widely available doesn't mean they should be used in every situation.
How shouldn’t generative AI be used?
While there are many ways that generative AI can be helpful, it’s on you to use these tools ethically and responsibly. To avoid getting into hot water:
Don’t depend solely on AI or blindly accept the answers it provides
Don’t pass off AI-generated text as your own
Don’t use AI to do your work for you
Don’t forget to verify AI’s output, as the answers it provides are not always accurate or truthful
You’ve probably heard that generative AI can present information as "facts" that aren't actually true. In other words, it can make things up. Developers of generative AI tools call this phenomenon “hallucinations.”
AI should be used to augment human learning, creativity, and agency—not replace them. This is why many AI experts stress the importance of keeping humans in the loop.
Humans in the Loop?
Keeping humans in the loop means ensuring that human oversight and intervention are part of the process when using AI. Why have humans review, supervise, and make decisions or adjustments as needed, rather than relying solely on AI to operate independently? Here are two good reasons:
Quality Control: Humans can assess the quality and appropriateness of the AI's output, ensuring that the results align with desired outcomes, especially in areas where the AI might make unexpected or inappropriate decisions.
Ethical Considerations: AI outputs might distort, fabricate, or leave out key information, resulting in a biased or inaccurate view of reality. It's important for humans to make decisions and weigh the ethical implications of how these new tools are deployed, especially since AI lacks awareness or understanding.
Keeping humans in the loop is essential, especially as these tools continue to advance. Both thrilling and groundbreaking, generative AI is potentially one of the most powerful tools humans have ever created. It’s up to each of us to use it appropriately, now and in the future.