[JM]: One of the things that I run into sometimes is I need to change some text in a file or in a directory full of files, matching some pattern where there's some word or phrase and I want to replace it.
[JM]: Sometimes it could be a misspelling, a capitalization change, whatever it is, and I find that
[JM]: That's relatively easy to do in most text editors if it's like a single file.
[JM]: But I'm curious, Dan, how often do you come across this problem and what method do you usually use to solve it?
[DJ]: I would say I come across that problem occasionally, not super frequently.
[DJ]: When I do come across it, yeah, I usually use a text editor.
[DJ]: I've been using Visual Studio Code as my basic text editor for a number of years, and it has quite a good interface for doing find and replace across one or more files.
[DJ]: I started using Vim more recently, which is a whole other topic for 10 future shows, probably.
[DJ]: And I've figured out how to find and replace in a file there, but not necessarily in a lot of them.
[DJ]: But yeah, it's generally been having to lean on a text editor that has find and replace built in.
[JM]: That's how I do it as well.
[JM]: I find that in MacVim, for example, which is the GUI application that I use most of the time when I'm at my workstation, there is a find and replace feature built into it that makes it fairly easy to find and replace within the active file that you are editing.
[JM]: But if you ever want to do a recursive find and replace, that's not something that is, I think, super easy to do in a lot of applications, including the one that I'm using.
[JM]: And that's why I found it interesting to have come across Scooter, which is a TUI or terminal user interface application to do recursive find and replace in an interactive way.
[JM]: In the past, when I feel like I need to do searching and replacing across multiple files, I've created scripts to do this, where I use some combination of ripgrep and sd, which is something like sed.
[JM]: And this technique works pretty well, but then I have to write a script to do it or come up with some relatively clunky single line invocation.
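For context, here is a minimal sketch of the kind of script being described, assuming ripgrep (rg) and sd are both installed and on the PATH; the pattern, replacement, and directory below are placeholders rather than anything from the show:

```python
#!/usr/bin/env python3
"""Sketch of the rg + sd combination described above: use ripgrep to find the
files that contain a pattern, then hand that list to sd to rewrite them in
place. Assumes both tools are installed; all arguments are placeholders."""
import subprocess
import sys


def replace_everywhere(pattern: str, replacement: str, root: str = ".") -> None:
    # rg --files-with-matches prints only the paths of files containing the pattern
    found = subprocess.run(
        ["rg", "--files-with-matches", pattern, root],
        capture_output=True,
        text=True,
    )
    files = [line for line in found.stdout.splitlines() if line]
    if not files:
        print("No matches found.")
        return
    # sd <find> <replace> <files...> rewrites the listed files in place
    subprocess.run(["sd", pattern, replacement, *files], check=True)
    print(f"Replaced in {len(files)} file(s).")


if __name__ == "__main__":
    # usage sketch: python replace.py 'old phrase' 'new phrase' ./transcripts
    pattern, replacement = sys.argv[1], sys.argv[2]
    root = sys.argv[3] if len(sys.argv) > 3 else "."
    replace_everywhere(pattern, replacement, root)
```

Both rg and sd treat the pattern as a regular expression by default, which is part of what makes a one-off script like this feel clunky next to an interactive tool like Scooter.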
[JM]: And I feel like Scooter fills a gap
[JM]: when it comes to addressing this kind of problem.
[JM]: And like you, I don't really have much need to do recursive search and replace, most of the time I'm trying to do it in a single file.
[JM]: But one use case that occurs to me is generating transcripts for this podcast, because starting at about episode 23, I started generating transcripts, but episodes one through 22, for example, still do not have transcripts.
[JM]: Well, I can go and generate those from our past shows,
[JM]: but inevitably there are going to be things that they get wrong.
[JM]: They meaning the transcription will get certain things wrong.
[JM]: And whereas when I'm editing a single transcript, yes, I can just use MacVim and do a search and replace and make those changes.
[JM]: But a tool like Scooter would allow me to, say, correct a common misspelling that the transcription is getting wrong and replace it across all of those episodes in one go.
[JM]: So I'm looking forward to finding admittedly somewhat esoteric use cases for Scooter.
[JM]: And one more thing that occurred to me while thinking about it is this is probably also a useful tool on the server side because sometimes I might be
[JM]: changing one or more Docker compose files, for example, on a remote server, just to make a tweak to see whether it works before making the same change in a more proper way, you know, like in source control.
[JM]: And I think this tool could be useful for making those kinds of server side tweaks in much less time than the way that I've probably done it in the past.
[JM]: So thanks to Sean Hammond for recommending Scooter.
[JM]: And if you
[JM]: have any idea what any of this means, and you feel like it could address a use case for you, then give it a try.
[JM]: Okay, moving on, I would like to talk a little bit about Kagi.
[JM]: We have talked about Kagi in the past.
[JM]: So as follow up to what we've talked about in the past about Kagi, my friend Trey Hunner was gracious enough to gift me a three month trial to Kagi.
[JM]: And I have to give it to Kagi.
[JM]: This is really a clever idea because the way that they did it before is you could sign up and you got a hundred searches for free before you would need to start paying if you wanted to continue to use it.
[JM]: So me being me, the fact that I only had 100 searches meant that I felt like I needed to use them wisely.
[JM]: So I didn't really use them, which means I basically forgot that Kagi exists and just moved on with my life.
[JM]: But as a Black Friday promotion, Kagi cleverly allowed people who are paying customers to gift three month trials to people.
[JM]: as a way of getting them more broadly familiar with the value that it offers.
[JM]: And so for the last couple of weeks, I have been trying it on a daily basis and using it as my everyday search engine.
[JM]: And I have to say, I so far am quite pleased with it.
[JM]: I've been quite happy with DuckDuckGo.
[JM]: And so it's not that I necessarily had some problem that I was encountering that I needed to solve.
[JM]: But I do feel like in general that the results that I get from Kagi are better on average than DuckDuckGo.
[JM]: And so far, so good.
[JM]: We'll see how I feel about this after a couple more months.
[JM]: But if this continues the way that I think it will, then I will very likely be a paying customer.
[JM]: On a side note, it is beyond frustrating that Apple will not allow you to choose a default search engine that isn't on their default list for Safari.
[JM]: So the default list is Google, Yahoo, Bing, DuckDuckGo, and Ecosia, however that word is pronounced. If you want to use a search engine that's not one of those five as your default in Safari, tough luck, you can't.
[JM]: You have to choose one of those five.
[JM]: And so what Kagi was forced to do is create an application extension for Safari, the sole purpose of which is to intercept searches for whichever search engine you chose as your default from that list and then route that search to Kagi instead.
[JM]: It's ridiculous that Kagi has to jump through these hoops in order to make this possible.
[JM]: And that we as users of our own computers have to then install this browser extension again, just to get around Apple's absurd stance on no, you don't get to control which search engine is your default.
[JM]: And another side note, that means you can't even search via
[JM]: the one that you've set as your default anymore in Safari because of course it will now automatically always get redirected.
[JM]: That's the only workaround.
[JM]: So if I want to do a DuckDuckGo search in Safari, I can't because it's going to get redirected to Kagi because that's the only way to make it work.
[JM]: And of course, it is this kind of behavior that convinces the legislators to enact regulations intended to force Apple to, well, stop being jerks and allow people to use their computing devices in a less restrictive manner.
[JM]: But aside from that, I'm really happy with Kagi.
[DJ]: Which again, none of this is actually Kagi's fault.
[DJ]: Sounds like your gripe is really not with Kagi, obviously, but is with the maker of the browser that you use for now.
[DJ]: Shots fired, Apple.
[DJ]: You're putting them on notice.
[DJ]: Yes.
[JM]: On a somewhat related note, I saw today that Brett Terpstra...
[JM]: who has written many interesting automation tools that I've used over the years, posted the following on the Fediverse.
[JM]: Project idea, a browser extension that lets you report AI slop sites and based on crowdsourced reports, flag sites in search results known to be AI generated.
[JM]: It's insane that even if I'm not searching with AI, which I never do, I still have to visit a series of sites
[JM]: for 30 seconds each, only to quickly realize someone just posted my search term into ChatGPT and then published the result.
[JM]: It's half the first page of results before you get to something written by a human and the search engines clearly aren't interested in solving this.
[JM]: I would love a report AI slop button in my browser and a badge on AI sites and search results.
[JM]: It would save me time plus there would be a certain amount of satisfaction in clicking that button rather than just screaming into the void.
[JM]: Well, I don't think that Brett is a Kagi user because if he were, he would know that about a month ago, this now exists for users of Kagi.
[JM]: From their related blog post on this topic, all Kagi search users can now flag low-quality AI content, otherwise known as AI slop, in web, image, and video search results.
[JM]: We will verify these reports using our own signals.
[JM]: If a domain primarily publishes AI generated content, we will downrank it in Kagi search and mark it as AI slop.
[JM]: If a page is AI generated, but the domain is mixed, not mostly AI, we will flag the page as AI generated, but will not downrank it.
[JM]: For media results, images and videos confirmed as AI generated will be labeled as such and automatically downranked on the results page.
[JM]: Users can also choose to filter out AI generated media entirely.
[JM]: So I did not know this until I saw replies to Brett's post and thought, well, cool.
[JM]: Yet another reason that I should continue to use Kagi and I look forward to using this if and when I come across AI slop search results in Kagi's results, which hopefully I won't because other people are already marking it as garbage and thus I won't see it.
[DJ]: I'll be really interested in how that plays out.
[DJ]: It's not a bad way on the face of it to try to let users have some sort of agency over what they see in search, right?
[DJ]: Like crowdsourcing and moderation like that can certainly be abused potentially, but I think it's better to have such a system than not, right?
[DJ]: Like if it's crowdsourced, you're accountable to
[DJ]: a community and that has upsides and downsides.
[DJ]: If it's not crowdsourced, though, you're just accountable to the company running the search engine, which is, if anything, potentially less transparent.
[DJ]: So it's cool that they are even trying an experiment like that.
[DJ]: I'm not aware of any other search engines in the space doing it.
[DJ]: I am also hopeful that...
[DJ]: But what you don't know, Justin, is I actually asked your friend Trey to give you this three-month trial because it's all part of my long game.
[DJ]: Because my hope is that by next Black Friday, as a satisfied Kagi customer, you will then give me a three-month free trial so that I can try Kagi.
[DJ]: That is some fourth dimensional chess right there.
[DJ]: That's right.
[DJ]: That's right.
[DJ]: To go all the way back to the beginning of this segment, though, I do actually think that change in the trial model is really interesting, because I immediately got what you said: because you only had 100 searches, you were searching less, since each one feels dear to you.
[DJ]: It's that idea of like scarcity, right?
[DJ]: And I used to feel that way back in the 14th century when there was such a thing as film cameras.
[DJ]: When I'd go on vacation, I didn't have a smartphone that could take infinity pictures.
[DJ]: I had generally a disposable camera that had 36 shots in it, period.
[DJ]: And then it was done.
[DJ]: And as a result, I would almost never take pictures because at no given point on that vacation did it ever feel like, well...
[DJ]: Is this moment really special enough to spend one of my 36 photos on?
[DJ]: And then on the last day on the way to the airport, you're snapping all these not very good photos, and that's all you have to remember your trip.
[DJ]: I was immediately reminded of it with that notion that, you know, not to armchair quarterback Kagi's marketing decisions, but clearly they changed this one.
[DJ]: In retrospect, at least, saying, well, you only have 100 searches was not a good idea, really, because what you want the person to do is search a lot.
[DJ]: So they really learn to love your search engine.
[DJ]: So that being the case, the time-based trial really does make more sense, right?
[DJ]: It's like, OK, look, you have three months.
[DJ]: Use the heck out of this, right?
[DJ]: So you're going to be searching on it all the time.
[DJ]: to make the most of your trial.
[DJ]: Whereas if they said, well, be careful, Justin, this is search 65 of 100.
[DJ]: You're like, oh no, do I really need to know about like how mozzarella is made today?
[JM]: Yeah, and that only works when you allow paying customers to gift it to someone else.
[JM]: Because if you just allowed everyone who signed up for an account to get free 90 days worth of Kagi searches, well, people would just create new accounts every three months with different email addresses and Kagi would be losing money left and right.
[JM]: So yeah, I agree that this is a much more effective idea than the one they originally pursued.
[JM]: One last note, I saw someone respond in this thread saying the way that I've dealt with AI slop is to go to Google and use the additional search term before:2023, with the idea being that that's sort of when the AI slop problem started.
[JM]: And so that's great if the thing you're trying to search for is not say super current.
[JM]: And obviously this is only useful for things that existed before 2023.
[DJ]: This is the point where if we could license music for this show, Metallica's Sad But True would start playing mournfully.
[JM]: All right.
[JM]: In other news, Mozilla has a new CEO.
[JM]: And I don't have a lot to say about this particular bit of news.
[JM]: We've talked about Mozilla in the past.
[JM]: We've talked about some of their not-so-great recent decisions, like putting more and more generative software features into their browser.
[JM]: But there was a quote from his first interview as CEO that I thought was really indicative of why I think we and other people have so many problems with the way that Mozilla is choosing to be managed.
[JM]: And that quote from the article is, the new CEO says he could begin to block ad blockers in Firefox and estimates that would bring in another $150 million,
[JM]: but he doesn't want to do that.
[JM]: It feels off mission, end quote.
[DJ]: Oh, it does?
[DJ]: I'm glad that the new CEO of Mozilla has at least heard of why that company exists and why people care about it, that he's at least glancingly familiar with why people might use Firefox enough to know that maybe denying ad blocking technology in the browser would turn off a lot of their users.
[JM]: I just find this such a bizarre way to communicate as CEO.
[JM]: It would be like if Nike said, well, we could use four-year-old child laborers to assemble our shoes and we would make millions and millions of dollars in the process, but that wouldn't align with our company values.
[JM]: Okay.
[JM]: great, but why mention it in an interview?
[JM]: I don't understand.
[JM]: That doesn't make a lot of sense.
[JM]: If it's not in your mission, if it doesn't align with your values, why are we talking about it?
[DJ]: Yeah, there's such a thing as I've seen this put as like saying the quiet part out loud.
[DJ]: I was reading a post recently by a writer named Ed Zitron, who does really amazing deep dives into analyzing tech companies.
[DJ]: And he was writing about NVIDIA, which is obviously an interesting case study in late 2025, since they've become like the largest company that has ever existed in the world, at least in terms of market capitalization.
[DJ]: And the post is full of these statements that NVIDIA has been making.
[DJ]: Basically, to summarize it, NVIDIA is like, listen, we're not at all like Enron, you know, the notorious instance of fraud from the turn of the century.
[DJ]: And Ed is like, uh...
[DJ]: Yeah, cool.
[DJ]: I mean, no one mentioned Enron, NVIDIA, so it's kind of weird that you brought it up.
[DJ]: And it's that same kind of thing where it's like, wait a minute, if you know that, like, yes, this would make us more money, but we're not going to do it, why go out of your way to say it? At best, it's this weird form of almost virtue signaling where it's like, you know, I could do something bad, Justin, but I know that I shouldn't.
[DJ]: To which your response should be, cool, Dan.
[DJ]: I mean, I guess I'm glad you have...
[DJ]: any moral compass at all.
[DJ]: That's very impressive.
[DJ]: It's a low bar is what I'm saying.
[JM]: Yeah.
[JM]: And it also feels somewhat threatening to me.
[JM]: It's like a defense of putting all of this generative software stuff in the browser.
[JM]: It's like, okay, you know, yeah, we're putting generative stuff in the browser, but
[JM]: you know, we could do this other really evil thing, but we're not going to.
[JM]: So maybe y'all should stop complaining about what we're doing with the whole generative stuff in the browser.
[JM]: It's like, I don't know.
[JM]: It's a weird way to try to defend what you're doing, to say, well, we could be doing this other bad thing that you don't want us to do.
[JM]: I don't know.
[JM]: Maybe I'm just reading too much into the statement, but.
[JM]: I don't seem to be the only person who found this part of the article to be, well, not confidence inspiring in terms of this new leader of Mozilla.
[DJ]: No, but I think you actually raise a really good point about that sort of threat, this form of playing defense where,
[DJ]: hey, we're taking this company in a direction that a lot of its fans don't like.
[DJ]: And instead of addressing that directly and maybe trying to come to some kind of terms, they're basically saying, well, you should just be thankful we're not doing this even worse thing.
[DJ]: That's a bad justification.
[DJ]: It does sort of remind me of on an episode a long time ago, we talked about some other thing that Mozilla did.
[DJ]: It had to do with like privacy tracking or ad tracking or something like that.
[DJ]: And as I recall, what we came to was that what they'd actually done was not as bad as the way they'd communicated about it.
[DJ]: And this feels like another example of that, where it's like, yeah, I mean, this company is gonna do what they're gonna do.
[DJ]: I think a lot of us don't like what they seem to feel they need to do.
[DJ]: But then on top of that, they're not doing a good job of trying to bring us along.
[JM]: No, and I have a lot of affinity for Firefox given its origins in terms of Netscape before it and Mosaic before that.
[JM]: Mosaic was the first browser I ever used.
[DJ]: I think Mosaic was basically the first browser at all.
[JM]: Certainly the first popular one.
[JM]: I don't recall if it was the very, very first one.
[JM]: So I have a lot of affinity for Firefox just from a nostalgia standpoint.
[JM]: But every day, every time Mozilla seems to do anything, that goodwill just gets eroded to the point where I'm just not interested in using it anymore.
[JM]: One of the comments, by the way, that I loved regarding this new CEO is the following post.
[JM]: Quote, Mozilla has a new CEO who has been at Mozilla for less than a year, has no prior open source experience, but well in fintech and real estate, has an MBA, aka brainworm diploma, is all in on AI.
[JM]: That's exactly the kind of bingo profile the whole community has been waiting for, end quote.
[JM]: Yeah.
[JM]: And a brainworm diploma?
[JM]: Ouch, I feel attacked as someone who has an MBA, but you know what?
[JM]: Also, fair.
[JM]: I kind of dig it.
[JM]: A moniker, perhaps well-earned.
[JM]: All right, in other news, there is a
[JM]: new large language model that I want to bring to your attention because it is one of the first fully open large language models that you can download and run on your own computer that is quite competitive with other large language models that you could download and run on your computer that are much less open.
[JM]: And this model is called OLMo, O-L-M-O.
[JM]: OLMo 3 was released last month by Allen AI, which is a nonprofit institute founded by the late Paul Allen, a co-founder of Microsoft.
[JM]: And when I went to pull up the site in preparation for the show today, I noticed that they released an update to it just a few days ago, making it even more effective across a lot of benchmarks than it was just a couple of weeks prior.
[JM]: It's available in a thinking, or reasoning, variant, as well as an instruct variant, which is more useful for chatbots and quick responses.
[DJ]: Instruct is just prompt goes in, output comes out without like a so-called iterative reasoning loop.
[DJ]: Is that right?
[JM]: That's right.
[JM]: That chain of reasoning loop that you've probably seen from reasoning models is part of the think variant, not the instruct variant.
[JM]: Gotcha.
[DJ]: Okay.
[DJ]: One of the things I've always had a tricky time parsing in the world of LLMs is these various words that end up being used, and then they just kind of become part of the lexicon.
[DJ]: So it's like, well, this one's a mixture of experts model.
[DJ]: And it's like, wait, what does that mean?
[DJ]: I've looked into it.
[DJ]: I sort of know what it means, but still it's...
[DJ]: you know, think versus instruct.
[DJ]: It's like, okay, those are verbs, but what does that actually mean the thing is doing?
[JM]: Those single words don't really convey a lot of semantic meaning.
[JM]: I agree.
[JM]: And sometimes I have to look up to see like, oh, right.
[JM]: That's the kind of task you would usually use this particular variant for.
[DJ]: That's the thing I'm always curious about is that there's these model announcements and there's a little part of me that wants to go like tug on the person's sleeve and be like, but what do I do with this?
[JM]: And this latest OLMo 3.1 release is available in a 32 billion parameter size, which means that if you have a computer with, I would say, roughly 48 to 64 gigabytes of RAM, perhaps,
[JM]: then it should run just fine.
[JM]: I haven't actually tried this on a 32 gigabyte machine, so I don't know for sure.
[JM]: I'm kind of just guessing.
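As rough back-of-the-envelope arithmetic (my own estimate, not an official requirement from the OLMo project), the RAM needed is driven mostly by parameter count times bytes per parameter at a given quantization:

```python
# Back-of-the-envelope memory estimate for a 32-billion-parameter model.
# These figures cover the weights only; real usage also depends on the
# quantization format, context length, KV cache, and runtime overhead.
PARAMS = 32e9

for label, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    weights_gb = PARAMS * bytes_per_param / 1e9
    print(f"{label:>5}: ~{weights_gb:.0f} GB for the weights alone")

# fp16 : ~64 GB -> wants a 64+ GB machine
# 8-bit: ~32 GB -> tight on a 32 GB machine once overhead is added
# 4-bit: ~16 GB -> plenty of headroom with 48 to 64 GB of RAM
```

Which is roughly why a 4-bit or 8-bit quantization of a 32B model lands comfortably in the 48 to 64 gigabyte range mentioned above, while a 32 gigabyte machine is more of a maybe.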
[JM]: But my initial experimentation with it on this M3 Ultra Mac Studio has been that it is quite good.
[JM]: It isn't going to benchmark as well as, say, the latest from Qwen 3, but Qwen 3 is not a fully open model.
[JM]: And so it depends somewhat on what it is that you value in what you're doing.
[JM]: If you want the absolute best model to run locally,
[JM]: OLMo 3 might not be it, but you have the, I suppose, good feeling of knowing that this is, as far as I understand it, the most fully open model that you can run on your own hardware these days.
[DJ]: When we say open in this context, what is the difference between OLMo and Qwen?
[DJ]: Like which part is more or less open and what are the implications?
[JM]: I'm a little fuzzy on some of that.
[JM]: So I will just give you my very basic understanding.
[JM]: And that's that most of the models that you can download and run on your own hardware are what are referred to as open weights models.
[JM]: Whereas with OLMo, it is, as I understand it, not only open weights, but also open source.
[JM]: So the code used to train the model is also fully available.
[JM]: The implications of that are really up to you, like how much you value the openness and transparency
[JM]: of, say, OLMo or some other similarly open model versus one that has less transparency.
[JM]: And if you go to the European Open Source AI Index, you can see a ranking of various downloadable models based on how open they are.
[JM]: And I imagine this site, and as usual, there will be a link in the show notes, will provide you with perhaps more detail regarding the questions that you're asking.
[JM]: On this European Open Source AI Index site, if you hover on the bar underneath the models in this list, it will say things like availability, base model data, end user model data, base model weights, end user model weights, training code, code documentation, hardware architecture, preprint, paper, model card, data sheet, et cetera, et cetera, indicating how much transparency you have into how this model was trained.
[DJ]: The tricky thing when it comes to evaluating what do we want to use when it comes to large language models is parsing this dense web of language, right?
[DJ]: Which I guess is appropriate for a technology that parses language.
[DJ]: Because so I've been trying to understand this and listeners to this show who are very familiar with the world of LLMs are probably throwing the internet connected radio that they're using to listen to this podcast out the window at this point.
[DJ]: But my understanding is this.
[DJ]: To create a large language model, you perform what we call training.
[DJ]: And to use a large language model, you perform what we call inference.
[DJ]: That's the word that generally describes you give an input to a model, the model compares it to its statistical model and produces an output.
[DJ]: So I think what I've been trying to understand correctly is this notion of weights.
[DJ]: Are the weights essentially the values in that statistical model?
[DJ]: And therefore, if you have an open weight model, you have some insight, not that it's necessarily meaningful insight, but you have insight into the values being used to transform the input into the output?
[DJ]: Or are the weights being described actually what's involved in the training step of the model?
[DJ]: I think the weights are the output of the training, aren't they?
[DJ]: Yeah.
[DJ]: I'm not sure.
[DJ]: We'll have to just have people write in or they can reply on the Fediverse and tell us that we are foolish.
[JM]: That's right.
[JM]: We do not know everything.
[JM]: So if you know some stuff, then by all means share it with us.
[JM]: Pull requests welcome.
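On the weights question raised a moment ago: in the usual terminology, yes, the weights are the numeric parameters that training produces and that inference then applies unchanged to new input. A toy, single-weight sketch of that relationship, nothing like a real LLM but with the pieces playing the same roles:

```python
# Training adjusts a numeric parameter (the "weight") to fit the data;
# inference simply applies the finished weight to new input.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with targets y = 2x

# --- training: nudge the weight to minimize squared error ---
w = 0.0                      # the weight, initialized arbitrarily
learning_rate = 0.05
for _ in range(200):
    for x, y in data:
        prediction = w * x
        gradient = 2 * (prediction - y) * x   # derivative of (w*x - y)^2
        w -= learning_rate * gradient

print(f"trained weight: {w:.3f}")   # converges to roughly 2.0


# --- inference: apply the frozen weight to input it has never seen ---
def infer(x: float) -> float:
    return w * x

print(infer(10.0))  # roughly 20.0
```

An open-weights release publishes the trained values (billions of them, in a real model); a fully open release like OLMo also publishes the code used to train them, as discussed above.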
[JM]: Okay, moving on, I would like to talk a little bit about Unraid.
[JM]: Unraid is a solution to a problem that I had years ago, where I would have a bunch of files that I wanted to keep around for a long time, and a decent number of them in terms of storage space.
[JM]: So there were enough that they wouldn't fit on a single drive.
[JM]: And even if they did fit on a single drive, I wanted some kind of redundancy because drives fail.
[JM]: And I didn't want to lose all that data.
[JM]: So back in the day, I remember buying a five bay drive enclosure in which I put five, I think it was 120 terabyte drives.
[DJ]: Gigabyte.
[DJ]: It definitely wasn't 120 terabyte drives.
[DJ]: Sort of step on your anecdote there, buddy.
[DJ]: Get the facts straight.
[JM]: Clearly been a long day.
[JM]: Yes.
[JM]: 120 gigabytes.
[JM]: Thank you for the correction.
[JM]: The problem is, as soon as I exceeded the amount of space in this enclosure, I couldn't just pull out one of those drives and put in a bigger one, because this was a RAID 5 array and each of the drives has to be the same size.
[JM]: Not too long after that, I came across a project called Unraid.
[JM]: And Unraid allows you to do what up until that point for me was just a pipe dream, which is put a bunch of drives in some enclosure that don't all have to be the same size.
[JM]: And if you need to upgrade your storage, you can either just add a new drive or replace a drive with a bigger one.
[JM]: You can essentially assemble a bunch of disparate drives of varying sizes and get a whole bunch of flexibility in terms of managing your storage.
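A rough sketch of the capacity trade-off being described, under a deliberately simplified model and with made-up drive sizes: classic RAID 5 effectively limits every member to the size of the smallest drive, whereas an Unraid-style array dedicates its largest drive to parity and lets the remaining data drives keep their full capacity. Real arrays have more nuance than this.

```python
# Hypothetical mix of drive sizes, in gigabytes
drives_gb = [120, 250, 250, 500, 500]


def raid5_capacity(drives):
    # (n - 1) usable members, each effectively limited to the smallest drive
    return (len(drives) - 1) * min(drives)


def unraid_capacity(drives):
    # the largest drive serves as parity; the rest keep their full capacity
    return sum(drives) - max(drives)


print(raid5_capacity(drives_gb))   # 480  -- the larger drives are mostly wasted
print(unraid_capacity(drives_gb))  # 1120 -- every data drive contributes fully
```

Either way, single parity means the array survives one drive failure; the difference is how gracefully mixed sizes and later upgrades are handled.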
[JM]: So I have been using Unraid for I don't actually know how many years.
[JM]: It's at least a decade.
[JM]: It's probably approaching 15.
[JM]: And I have been really happy with it.
[JM]: It is the brains of the machine that runs in a closet downstairs.
[JM]: In addition to being a storage server, it also runs a bunch of applications with direct access to the files on that storage array, which means that when I want to relax on the sofa and watch a movie or a TV show, I am pushing play in Plex, which runs on this Unraid-powered computer and serves me those files so that I can watch them whether I'm on the sofa or on an iPad on the other side of the planet.
[JM]: It also runs my Home Assistant virtual machine
[JM]: for all of my home automation, controlling all the lights in the house.
[JM]: So if I walk into a room, I have millimeter wave sensors that can sense movement and they turn the lights on automatically.
[JM]: I could go on and on about the various applications that I have running on this thing, but it's nuts when I think of how little I've spent to make this thing work.
[JM]: I bought this ancient Dell server for $100, I believe, and the Unraid license was minimal.
[JM]: And it was really just a question of the drives.
[JM]: That's really been my only expense other than the electricity required to keep the machine running.
[JM]: And honestly, I just couldn't be happier with it.
[JM]: And I think Dan, you've been experimenting with Unraid as of late, have you not?
[DJ]: I have, yeah.
[DJ]: I've always wanted to have a kind of central home server in addition to whatever else I was doing with computers.
[DJ]: Something that could be in my house, on my network, always running, or almost always, and provide access to a bunch of files and services.
[DJ]: So that whatever else I'm doing with other devices, I can get access to them, whether I'm on my phone somewhere else and want access to a file or want to stream media that is my media that I own.
[DJ]: and have control of, as opposed to having to stream it from some external service.
[DJ]: And so I've taken little steps in this direction.
[DJ]: And most recently, I built a Windows gaming PC last year.
[DJ]: I just had that always running, and I started hosting some services
[DJ]: on that, you know, hosting them in Docker containers, which on Windows you have to do using Windows Subsystem for Linux, which works, but it's all a little weird.
[DJ]: And thus far, and I'm only about a week or so in, Unraid makes this much easier, because the whole system is optimized around setting up a bunch of storage hardware, then setting up a set of shares living on that storage hardware, abstracted so that the applications running above them don't really have to know what your disks are doing, and then deploying a bunch of applications in containers, which is yet another layer of abstraction.
[DJ]: And granted, when things go wrong, all of these abstraction layers can make things complicated.
[DJ]: But on the happy path, when this works, and thus far it mostly works, it works really smoothly.
[DJ]: So like I set up this server, I copied the contents of my old server over to it, and now I bought some new bigger drives so I have a bunch of headroom.
[DJ]: And for things like media, like my collection of movies and music, I set up Jellyfin, which is another open source media.
[DJ]: Well, it's an open source media streaming server as opposed to Plex, which is not open source, but which I have used for a long time.
[DJ]: But I decided to try something new.
[DJ]: I set up Jellyfin and was streaming movies to my TV extremely easily.
[DJ]: There was barely any configuration to do it.
[DJ]: Almost all just works out of the box, which was very pleasant.
[DJ]: So I'm really looking forward to continuing to build on this system.
[DJ]: And I have a bunch of stuff still to figure out, like, you know, properly securing user permissions and things like that before I expose file shares on my network.
[DJ]: How do I use this thing from outside my house?
[DJ]: What are the other services I want to run on it?
[DJ]: How do I back all this stuff up so that I don't lose all of this data if a drive fails?
[DJ]: And to that point, and maybe from the very originating point of this discussion, I am still learning exactly, like, how does this disk array work with regard to redundancy and all that stuff?
[DJ]: So the most important thing about this, Justin, is that I love tinkering with computers.
[DJ]: I think I've finally recognized that that is my core hobby.
[DJ]: And this project is giving me endless opportunities for tinkering.
[DJ]: And so I'm very happy with it.
[JM]: That's cool.
[JM]: As a longtime Unraid user, I am always happy to find other folks who come across it and who get value out of it.
[JM]: So...
[DJ]: Yeah, well, and I learned about it from you, to be clear.
[DJ]: Like, you mentioned this in some context a couple of weeks ago, and I rapidly hurled myself down the rabbit hole.
[DJ]: So you are to blame, and thank you.
[JM]: You're welcome, and I'm sorry.
[JM]: All right, folks, that's all for this episode.
[JM]: We will be taking a couple of weeks off, but we'll be back in early January.
[JM]: You can find me on the web at justinmayer.com, and you can find Dan on the web at danj.ca.
[JM]: Reach out with your thoughts about this episode via the Fediverse at justin.ramble.space.