I've moved the SaaS I'm developing to SeaweedFS; it was rather painless. I should also move away from the minio-go SDK to the generic AWS one some day. No hard feelings toward the MinIO team from my side, though.
I ran a moderately large opensource service and my chronic back pain was cured the day I stopped maintaining the project.
Working for free is not fun. Having a paid offering with a free community version is not fun. Ultimately, dealing with people who don't pay for your product is not fun. I learnt this the hard way and I guess the MinIO team learnt this as well.
Completely different situations. None of the MinIO team worked for free. MinIO is a COSS company (commercial open source software). They give a basic version of it away for free hoping that some people, usually at companies, will want to pay for the premium features. MinIO going closed source is a business decision and there is nothing wrong with that.
I highly recommend SeaweedFS. I used it in production for a long time before partnering with Wasabi. We still have SeaweedFS for a scorching hot, 1GiB/s colocated object storage, but Wasabi is our bread and butter object storage now.
There's nothing wrong at all with charging for your product. What I do take issue with, however, is convincing everyone that your product is FOSS, waiting until people undertake a lot of work to integrate your product into their infrastructure, and then doing a bait-and-switch.
Just be honest from the start that your product will eventually abandon its FOSS licence. Then people can make an informed decision. Or, if you haven't done that, do the right thing and continue to stand by what you originally promised.
>Just be honest from the start that your product will eventually abandon its FOSS licence. Then people can make an informed decision.
"An informed decision" is not a black or white category, and it definitely isn't when we're talking about risk pricing for B2B services and goods, like what MinIO largely was for those who paid.
Any business with financial modelling worth its salt knows that very few things which are good and free today will stay that way tomorrow. The leadership of a firm you transact with may or may not say this in words, but there are many other ways to infer the likelihood of it by paying close attention.
And if you're not paying close attention, it's probably just not that important to your own product. Which risks you consider worth tracking is a direct extension of how you view the world. The primary selling point of MinIO for many businesses was "it's cheaper than AWS for our needs". That's probably still true for many businesses, so there's money to be made, at least in the short term.
"Informed decisions" mean you need to have the information.
As with software development, we often lack the information needed to make architectural, technical, or business decisions.
The common solution for that is to embrace this. Defer decisions. Make changing easy once you do receive the information. And build "getting information" into the fabric. We call this "Agile", "Lean", "data driven" and so on.
I think this applies here too.
There's a very big chance the MinIO team honestly thought they'd keep it open source, and only now gathered enough "information" to make this "informed decision".
Isn't this just normal sales for many products anyway? One attracts a customer with unreasonable promises and features, gets them to sign a deal to integrate, and then issues appear in production that make them realize they will need to invest more.
When you start something (a startup, a FOSS project, damn, even a marriage) you might start with the best intentions and then learn, change, or lose interest. I find it unreasonable to "demand" clarity "at the start" because there is no such thing.
Turning it around: any company that adopts a FOSS project should be honest and pay for something if it doesn't accept the idea that at some point the project will change course (which, obviously, doesn't guarantee much, because even if you pay for something they can decide to shut it down).
> I find it unreasonable to "demand" clarity "at the start" because there is no such thing.
Obviously you cannot "demand" stuff, but you can do your due diligence as the person who chooses a technical solution. Some projects have more clarity than others: the Linux Foundation or CNCF, for example, are basically companies sharing costs for things they all benefit from, like Linux or Prometheus monitoring, and it is highly unlikely they'd do a rug pull.
On the other end of the spectrum there are companies with a "free" version of a paid product and the incentive to make the free product crappier so that people pay for the paid version. These should be avoided.
> then doing a bait-and-switch
FOSS is not a moral contract. People working for free owe nothing to anyone. You got what's on the tin: the code is as open source after they stop as it was when they started.
The underlying assumption of your message is that you are somehow entitled to their continued labour which is absolutely not the case.
exactly
> Ultimately, dealing with people who don't pay for your product is not fun.
I find it the other way around. I feel a bit embarrassed and stressed out working with people who have paid for a copy of software I've made (which admittedly is rather rare). When they haven't paid, every exchange is about what's best for humanity and the public in general, i.e. they're not supposed to get some special treatment at the expense of anyone else, and nobody has a right to lord over the other party.
People who have paid for your software don't really have a right to boss you around. You can choose to be accommodating because they are your customers, but you hold as much weight in the relationship, if not more. They need your work. It's not so much special treatment as it is commissioned work.
People who don't pay are often not really invested. The link between more work and more costs doesn't exist for them. That can make them quite a pain, in my experience.
They point to AIStor as an alternative.
Other alternatives:
https://github.com/deuxfleurs-org/garage
https://github.com/rustfs/rustfs
https://github.com/seaweedfs/seaweedfs
https://github.com/supabase/storage
https://github.com/scality/cloudserver
Among others
I'm the author of another option (https://github.com/mickael-kerjean/filestash), which has an S3 gateway that exposes itself as an S3 server but is just a proxy forwarding your S3 calls onto anything: FTP, SFTP, IPFS, NFS, SMB, SharePoint, Azure, a git repo, .... It's entirely stateless and acts as a translation layer between S3 calls and whatever you have connected at the other end.
Didn't know about filestash yet. Kudos, this framework seems to be really well implemented, I really like the plugin and interface based architecture.
That's a great list. I've just opened a pull request on the minio repository to add these to the list of alternatives.
While I do approve of that MR, it is ironic, considering the topic was "MinIO repository is no longer maintained".
Let's hope the editor has second thoughts on some parts
I'm well aware of the irony surrounding minio, adding a little bit more doesn't hurt :P
Wrote a bit about differences between rustfs and garage here https://buttondown.com/justincormack/archive/ignore-previous... - since then rustfs fixed the issue I found. They are for very different use cases. Rustfs really is close to a minio rewrite.
Apart from Minio, we tried Garage and Ceph. I think there's definitely a need for something that speaks the S3 API but is just a simple file system underneath, for local, testing, and small-scale deployments. Not sure that exists? Of course, a lot of stuff is being bolted onto S3, and it's not as simple as it initially claimed to be.
What about s3 stored in SQLite? https://github.com/seddonm1/s3ite
This was written to store many thousands of images for machine learning
For testing, consider https://github.com/localstack/localstack
WAY too much. I just need a tiny service that translates common S3 ops into filesystem ops and back.
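For what it's worth, the core of such a translator is small enough to sketch. Below is a stdlib-only Python sketch where everything (class name, root directory) is invented for illustration; it handles only bare GET/PUT/DELETE on /bucket/key paths, with no auth, no ListObjects, and no multipart, so it is nowhere near a real S3 endpoint:

```python
# Minimal sketch: translate bare S3-style GET/PUT/DELETE object calls into
# filesystem operations. No auth, no ListObjects, no multipart: illustration
# only, and the ROOT path and class name are made up.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

ROOT = "/tmp/s3root"  # hypothetical data directory; one subdir per "bucket"

class S3FsHandler(BaseHTTPRequestHandler):
    def _target(self):
        # Map /bucket/key -> ROOT/bucket/key, refusing path escapes.
        rel = os.path.normpath(self.path.lstrip("/"))
        if rel.startswith(".."):
            return None
        return os.path.join(ROOT, rel)

    def do_PUT(self):
        target = self._target()
        if target is None:
            self.send_response(400); self.end_headers(); return
        length = int(self.headers.get("Content-Length", 0))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        with open(target, "wb") as f:
            f.write(self.rfile.read(length))
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def do_GET(self):
        target = self._target()
        if target is None or not os.path.isfile(target):
            self.send_response(404); self.end_headers(); return
        with open(target, "rb") as f:
            body = f.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_DELETE(self):
        target = self._target()
        if target is not None and os.path.isfile(target):
            os.remove(target)
            self.send_response(204)
        else:
            self.send_response(404)
        self.end_headers()
```

Running HTTPServer(("127.0.0.1", 9000), S3FsHandler).serve_forever() is enough to serve it; since the sketch never checks auth headers, any plain HTTP client can talk to it.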
Would be cool to understand the tradeoffs of the various block storage implementations.
I'm using seaweedfs for single-machine S3-compatible storage, and it works great. Though I'm missing out on a lot of administrative nice-to-haves (like easy access controls and a good understanding of capacity vs usage, error rates, and so on... this could be a PEBKAC issue, though).
Ceph I have also used, and it seems to care a lot more about being distributed. If you have fewer than 4 hosts for storage, it feels like it scoffs at you during setup. I was also unable to get it to perform amazingly, though to be fair I was doing it via K8S/Rook atop the Flannel CNI, which is an easy-to-use CNI for toy deployments, not performance-critical systems, so that could be my bad. I would trust a Ceph deployment with data integrity though; it just gives me that feeling of "whoever worked on this really understood distributed systems"... but I can't put that feeling into any concrete data.
We are using rustfs for our simple use cases as a replacement for minio. Very slim footprint and very fast.
Had a great experience with Garage as an easy-to-set-up distributed S3 cluster for home lab use (connecting a bunch of labs run by friends into a shared cluster via tailscale/headscale). They offer an eventual-consistency mode (consistency_mode = dangerous is the setting, so perhaps don't use it for your 7-nines SaaS offering) where your local S3 node will happily accept (and quickly process) requests and then replicate them to the other servers later.
Overall a great philosophy (targeting self-hosting / independence), clear and easy maintenance, nothing fancy, and an easy-to-understand architecture, design, and set of operation instructions.
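For reference, here's roughly what that looks like in garage.toml; the key names are from the Garage docs as I remember them (replication_mode was split into replication_factor and consistency_mode around v1.0), so double-check against the version you deploy:

```toml
# Sketch of the relevant garage.toml knobs; verify the key names against
# the Garage version you actually run.
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"

replication_factor = 3          # how many nodes hold a copy of each object
consistency_mode = "dangerous"  # ack writes locally, replicate to peers later

rpc_bind_addr = "[::]:3901"
rpc_secret = "<same hex secret on every node>"

[s3_api]
api_bind_addr = "[::]:3900"
s3_region = "garage"
```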
From my experience, Garage is the best replacement for MinIO *in a dev environment*. It provides a pretty good CLI that makes automated setup easier than MinIO's. In a production environment, however, I guess Ceph is still the best because of how prominent it is.
Will https://github.com/chainguard-forks/minio hold the fork?
I just bit the bullet last week and decided we are going to migrate our self-hosted minio servers to Ceph instead. So far a 3-server Ceph cluster has been set up with cephadm, and the last minio server is currently mirroring its ~120TB of buckets to the new cluster at a whopping 420MB/s; it should finish any day now.

The complexity of Ceph and its cluster nature is of course a bit scary at first compared to minio, a single Go binary with minimal configuration, but after learning the basics it should be smooth sailing. What's neat is that Ceph allows expanding clusters: just throw more storage servers at it, in theory at least; not sure where the ceiling is for that yet.

Shame minio went that way; it had a really neat console before they cut it out. I also contemplated le Garage, but it seems Elasticsearch is not happy with that S3 solution for snapshots, so Ceph it is.
Tangentially related, since we are on the subject of Minio: Minio has, or rather had, an option to work as an FTP server! That is kind of neat because CCTV cameras have an option to upload a picture of detected motion to an FTP server, and having that be a distributed minio cluster was a really neat option, since you could then generate an event on file upload and kick off a pipeline job or whatever. Currently I instead use vsftpd and inotify to detect file uploads, but that is such a major pain in the ass to operate; it would be really great to find another FTP-to-S3 gateway.
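The "react to a new file in the upload directory" part of that pipeline can at least be sketched without inotify. A stdlib-only Python sketch, where the directory name and callback are hypothetical, and a real version would also need to wait until an upload stops growing before firing:

```python
# Minimal sketch: fire a callback when a new file shows up in the FTP
# upload directory. Polls instead of using inotify so it stays stdlib-only;
# the paths and callback are hypothetical.
import os
import time

def scan_once(upload_dir, seen, on_new_file):
    """Call on_new_file(path) for every file not seen in a previous scan."""
    for name in sorted(os.listdir(upload_dir)):
        path = os.path.join(upload_dir, name)
        if os.path.isfile(path) and path not in seen:
            seen.add(path)
            on_new_file(path)

def watch(upload_dir, on_new_file, interval=1.0):
    """Poll upload_dir forever, triggering on_new_file for each new file."""
    seen = set()
    while True:
        scan_once(upload_dir, seen, on_new_file)
        time.sleep(interval)
```

on_new_file could shell out to something like aws s3 cp against whichever S3 endpoint replaces minio, or enqueue a pipeline job.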
This is timely news for me - I was just standing up some Loki infrastructure yesterday & following Grafana's own guides on object storage (they recommend minio for non-cloud setups). I wasn't previously experienced with minio & would have completely missed the maintenance status if it wasn't for Checkov nagging me about using latest tags for images & having to go searching for release versions.
So far I've switched to Rustfs, which seems like a very nice project, though <24hrs is hardly an evaluation period.
Why do you need a non-trivial dependency on object storage for a logs database in the first place?
Object storage has advantages over regular block storage if it is managed by cloud, and if it has a proven record on durability, availability and "infinite" storage space at low costs, such as S3 at Amazon or GCS at Google.
Object storage has zero advantages over regular block storage if you run it yourself:
- It doesn't provide "infinite" storage space - you need to regularly monitor and manually add new physical storage to the object storage.
- It doesn't provide high durability and availability. It has lower availability compared to regular locally attached block storage because of the complicated coordination of object storage state between storage nodes over the network. It usually has lower durability than cloud-provided object storage. If some data is corrupted or lost on the underlying hardware, there is little chance it will be properly and automatically recovered by DIY object storage.
- It is more expensive because of higher overhead (and, probably, half-baked replication) compared to locally attached block storage.
- It is slower than locally attached block storage because of much higher network latency: the difference can be 1000x, e.g. 100ms for object storage vs 0.1ms for local storage.
- It is much harder to configure, operate and troubleshoot than block storage.
So I'd recommend taking a look at other databases for logs, which do not require object storage for large-scale production setups. For example, VictoriaLogs. It scales to hundreds of terabytes of logs on a single node, and it can scale to petabytes of logs in cluster mode. Both modes are open source and free to use.
Disclaimer: I'm the core developer of VictoriaLogs.
See https://news.ycombinator.com/item?id=46136023 - MinIO is now in maintenance-mode
It was pretty clear they pivoted to their closed source repo back then.
Maintenance-mode is very different from "THIS REPOSITORY IS NO LONGER MAINTAINED".
Yes, the difference is the latter means "it is no longer maintained", and the former is "they claim to be maintaining it but everyone knows it's not really being maintained".
in theory, "maintenance mode" should mean that they still deal with security issues, and "no longer maintained" that they don't even do that anymore...
unless a security issue is reported, it does feel very much the same...
Given the context is a for-profit company who is moving away from FOSS, I'm not sure the distinction matters so much, everyone understands what the first one means already.
AIstor. They just slap the word AI anywhere these days.
In French the adjective follows the noun, so AI is actually IA.
On AWS S3, you have a storage level called "Infrequent Access", shortened IA everywhere.
A few weeks ago I had to spend way too much time explaining to a customer that, no, we weren't planning to feed their data to an AI when, in my reports, I talked about relying on S3 IA to reduce costs...
Is that an I (indigo) or an l (llama)? I thought it was L, lol
We all saw that coming. For quite some time they have been anything but transparent or open: vigorously removing even mild criticism of their decisions from GitHub with no further explanation, locking comments, etc. No one who's been following the development and has been somewhat reliant on min.io is surprised. Personally, the moment I saw "maintenance" mode, I rushed to switch to Garage. I have a few features ready that I need to pack into a PR, but I haven't had time to get to that. I should probably prioritize it.
We moved to garage because minio let us down.
This is becoming a predictable pattern in infrastructure tooling: build community on open source, get adoption, then pivot to closed source once you need revenue. Elastic, Redis, Terraform, now MinIO.
The frustrating part isn't the business decision itself. It's that every pivot creates a massive migration burden on teams who bet on the "open" part. When your object storage layer suddenly needs replacing, that's not a weekend project. You're looking at weeks of testing, data migration, updating every service that touches S3-compatible APIs, and hoping nothing breaks in production.
For anyone evaluating infrastructure dependencies right now: the license matters, but the funding model matters more. Single-vendor open source projects backed by VC are essentially on a countdown timer. Either they find a sustainable model that doesn't require closing the source, or they eventually pull the rug.
Community-governed projects under foundations (Ceph under Linux Foundation, for example) tend to be more durable even if they're harder to set up initially. The operational complexity of Ceph vs MinIO was always the tradeoff - but at least you're not going to wake up one morning to a "THIS REPOSITORY IS NO LONGER MAINTAINED" commit.
I think the landscape has changed, with hyperscalers, which have alternative profit avenues, outcompeting open-source projects for the money available in the market.
From my experience, Ceph works well but requires a lot more hardware and dedicated cluster monitoring than something simpler like Minio; in my eyes, they have somewhat different target audiences. I can throw Minio into some customer environments as a convenient add-on, which I don't think I could do with Ceph.
Hopefully one of the open-source alternatives to Minio will step in and fill that "lighter" object storage gap.
I guess we need a new type of open source license: one that is very permissive unless you are a company with much larger revenue than the company funding the open source project, in which case you have to pay.
While I loathe the moves to closed source, you also can't fault them; the hyperscalers just outcompete them with their own software.
The Server Side Public License? Since it demands that any company offering the project as a paid product/service also open source the related infrastructure, the bigger companies end up creating a maintained fork under a more permissive license. See Elasticsearch -> OpenSearch, Redis -> Valkey.
you won't get VC funding with this license, which is the whole point of even starting a business in this wider area
That would be interesting to figure out. Say you are a single guy in some cheaper cost-of-living region, and some SV startup gets, say, a million in funding. Surely that startup should give at least a couple thousand to your sole proprietorship if they use your stuff? But figuring out these thresholds gets complex.
I would say what we need is more of a push for software to become GPLed or AGPLed, so that it (mostly) can't be closed up in a 'betrayal' of the FOSS community around a project.
Well, anyone using the product of an open source project is free to fork it and then take on the maintenance. Or organize multiple users to handle the maintenance.
I don't expect free shit forever.
ai
So far, Garage seems to work quite well for me as an alternative, although it does lack some of minio's features.
Any good alternatives for local development?
seaweedfs: `weed server -s3` is enough to spin up a server locally
garage:

    image: dxflrs/garage:v2.2.0
    ports:
      - "3900:3900"
      - "3901:3901"
      - "3902:3902"
      - "3903:3903"
    volumes:
      - /opt/garage/garage.toml:/etc/garage.toml:ro
      - /opt/garage/meta:/var/lib/garage/meta
      - /opt/garage/data:/var/lib/garage/data

I didn't find an alternative that I liked as much as MinIO and, unfortunately, ended up creating my own. It includes just the most basic features and cannot be compared to the larger projects, but it is simple and efficient.
Go for Garage; you can check the docker-compose file and the "setup" crate of this project: https://github.com/beep-industries/content. There are a few tricks to make it work locally so that it generates an API key and bucket declaratively, but in the end it does the job.
versitygw is the simplest "just expose an S3-compatible API on top of a local folder" option.
The OS's file system? Implementation cost has decreased significantly these days; we can just prompt "use S3 instead of the local file system" if we need an S3-like service.
RustFS is dead simple to setup.
It has unfortunately also had a fair bit of drama already for such a young project.
Is there not a community fork? Even as is, is it still recommended for use?
I started a fork during the Christmas holidays https://github.com/kypello-io/kypello , but I’ve paused it for now.
We moved to Garage. Minio let us down.