==Privacy concerns of online AI models==
There are several concerns with using online AI models like [[ChatGPT]] ([[OpenAI]]), not only because they are proprietary, but also because there is no guarantee as to where your data is stored or what it is used for. Recent developments in local AI models provide an alternative: once downloaded from platforms like HuggingFace,<ref>https://huggingface.co/</ref> they work entirely offline. Common models include Llama ([[Meta]]), DeepSeek ([[DeepSeek]]), Phi ([[Microsoft]]), Mistral ([[Mistral AI]]), and Gemma ([[Google]]).

Luckily, running AI models locally solves many of these concerns. Several current models are small enough to run on a personal computer; these are indicated by a smaller parameter count, for instance 1.5B or 7B parameters. If the computer has a relatively modern GPU, it can also run one of the larger models for more accurate answers, as inference can be GPU-accelerated. The software recommended below runs on all major computer platforms (Windows/macOS/Linux). Be cautious if you download models other than the major ones, as platforms like HuggingFace allow anyone to upload.
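As a rough rule of thumb (an approximation, not a figure from any model's documentation), a model's download size is about its parameter count multiplied by the bytes stored per parameter, and quantized local models often use around 4 bits (0.5 bytes) per parameter. A minimal sketch of the arithmetic:

<syntaxhighlight lang="python">
# Rough rule of thumb (assumption): size ≈ parameters × bytes per parameter.
# Quantized local models often store roughly 4 bits (0.5 bytes) per parameter.
def approx_size_gb(parameters: float, bytes_per_param: float = 0.5) -> float:
    """Approximate on-disk size of a model, in gigabytes."""
    return parameters * bytes_per_param / 1e9

print(f"1.5B model: ~{approx_size_gb(1.5e9):.1f} GB")  # ~0.8 GB
print(f"7B model:   ~{approx_size_gb(7e9):.1f} GB")    # ~3.5 GB
</syntaxhighlight>

This is why the smaller models fit comfortably on a personal computer, while larger models quickly demand tens of gigabytes.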
=== LM Studio ===
One of the easiest programs for running these models is LM Studio.<ref>https://lmstudio.ai/</ref> It is user-friendly, with a graphical user interface aimed at beginners, and lets you get started in just a few clicks. It recommends appropriately sized models for your specific hardware and manages the rest of the installation for you. In terms of storage, you will need a few gigabytes per model, a one-time download; once a model is installed, no further internet connection is required.<ref>[https://www.youtube.com/@NetworkChuck NetworkChuck]: [https://www.youtube.com/watch?v=7TR-FLWNVHY The only way to run deepseek]</ref> The software lets you open chats with the large language model, which you can also organize into folders.
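Besides its chat interface, LM Studio can also expose the loaded model through a local HTTP server that mimics the OpenAI API (by default on <code>localhost:1234</code>). A minimal sketch, assuming the server is enabled and a model is loaded; the model name and prompt below are placeholders:

<syntaxhighlight lang="python">
# Minimal sketch: query LM Studio's local OpenAI-compatible server.
# Assumes the local server is enabled in LM Studio and a model is loaded;
# the default address is http://localhost:1234 (check your settings).
import requests

response = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # placeholder; LM Studio answers with the loaded model
        "messages": [
            {"role": "user", "content": "Summarize why local AI models help privacy."}
        ],
        "temperature": 0.7,
    },
    timeout=120,
)
print(response.json()["choices"][0]["message"]["content"])
</syntaxhighlight>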
=== Ollama ===
If you are comfortable using just the terminal, another option is to install software like Ollama.<ref>https://ollama.com/</ref> Once installed, you simply invoke the run command with the model you want to use, and Ollama downloads that model first if it is not already present. The website lists the most common models to run, like Llama ([[Meta]]), DeepSeek ([[DeepSeek]]), Phi ([[Microsoft]]), Mistral ([[Mistral AI]]), and Gemma ([[Google]]). More advanced users can also run Ollama inside Docker,<ref>https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image</ref> which isolates the model completely from the host system for extra security.
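While Ollama is running, it also serves a local REST API (by default on <code>localhost:11434</code>), so your own scripts can talk to the model. A minimal sketch in Python; <code>llama3</code> is just an example model name, so substitute whichever model you have pulled:

<syntaxhighlight lang="python">
# Minimal sketch: prompt a locally running Ollama instance via its REST API.
# Assumes Ollama is running and the model has been downloaded beforehand
# (e.g. with `ollama run llama3`); the default address is localhost:11434.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # example name; use any model you have pulled
        "prompt": "Why is the sky blue?",
        "stream": False,    # return one JSON object instead of a stream
    },
    timeout=300,
)
print(response.json()["response"])
</syntaxhighlight>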
==References==