It doesn’t look too great to be honest. It’s quite verbose and gets steps out of order. It starts out with loading a font from Google Fonts. https://developer.mozilla.org/en-US/docs/Learn_web_developme...
The area where MDN used to excel, and for now still does (reference documentation), is also showing cracks due to the recent changes at Mozilla. A long-time contributor gave up. https://github.com/mdn/content/pull/36294
> It starts out with loading a font from Google Fonts.
Besides the ordering of the steps... my first impression is that it's taking someone who knows nothing and conditioning them, from step 1, to not even think about the fact that they're compromising a privacy-respecting, free and open Internet. Your First Third-Party Tracker. Your First Gratuitous Third-Party Dependency.
A couple of times they do hit on copyright and licensing, however, which I approve of, but which is also a corporate-friendly thing to emphasize.
Another one:
> To choose an image, go to [Google Images](https://www.google.com/imghp) and search for something suitable.
If you have to name and link a search engine for the exercise, how about not endorsing a famously privacy-invading option, and instead having the student use a more privacy-respecting one?
> To choose an image, go to Google Images
Or plug one of the many royalty-free photo sites like Unsplash or Pexels (which comes with the bonus of teaching people to consider copyright when they publish a site).
Also, those two sites you mentioned have higher quality and less clutter: a significant improvement in tool choice over what MDN suggested.
Hopefully we can fix this via contribution.
I don’t think either of those websites pays Mozilla millions of dollars. Maybe the more important lesson they’re teaching is that money makes the world go round.
The search engine they linked to happens to provide a significant portion of Mozilla's revenue.
> A long time contributor gave up. https://github.com/mdn/content/pull/36294
Reading through that issue, MDN was almost definitely in the right. Also, calling them a long-time contributor might be a bit off; from what I can see, they did one typo fix and added one link: https://github.com/mdn/content/commits?author=WebReflection
Wow, and the added link was to another polyfill they had written, precisely the behavior they were (justifiably) being questioned about in the original linked thread.
I have zero outside context on that PR, but judging it purely by the actual written text in the comments, the MDN maintainer seems to have been far more mature than the contributor who ended up quitting. They both mention a lot of background in the comments themselves; what information is missing that would make the contributor seem more sympathetic? As matters stand, this doesn't look like a loss for MDN.
100% agree. These are their contributions: https://github.com/mdn/content/commits?author=WebReflection+
Not seeing them as a major contributor.
Honestly, I think the MDN team is in the right here.
The author of the PR provided almost no explanation for the addition and left the template essentially blank. Then the team provided a detailed explanation of a very reasonable policy, to which the PR author responded with what frankly reads like a temper tantrum.
Especially after the xz incident, maintainers should be very very wary of contributors who use manipulative techniques to try to get things merged against policy, and contributors who are trying to help in good faith should be patient and understanding when they hit those barriers.
This comes in quite early:
> Your library is insecure and should not be advertised as a spec-compliant ponyfill.
I think the MDN maintainer both started the argument with the terrible first reply and escalated the argument with this accusation.
You’re right that the author should have quit replying sooner, which he acknowledges himself. Good old XKCD is relevant here. https://xkcd.com/386/
Most of the issue seems to be a systemic problem at Mozilla though. The person from MDN was saying factually incorrect stuff and nobody else from MDN stepped in to help resolve the situation.
> both started the argument with the terrible first reply and escalated the argument with this accusation.
Why was the first reply terrible? They stated the policy and closed the PR. They did so professionally and calmly, and the author immediately threw a fit. Then the MDN person dug into the project more and found specific flaws and pointed them out as further evidence for why the policy (which they already cited and which should have been enough) exists.
> The person from MDN was saying factually incorrect stuff
Do you have specific examples?
> Most of the issue seems to be a systemic problem at Mozilla though.
Frankly, reading through the thread it feels like you started with this as the assumption and had cast the MDN maintainer as the bad guy before the exchange had even started. Mozilla has lots of problems and I'm the first to point them out, but this exchange doesn't demonstrate any of them—it just demonstrates how hard it is in open source to deal with entitled aspiring contributors.
Repeatedly calling it insecure and not to spec, when it's secure, does the exact same thing unless given unusual input, and, as a ponyfill, ensures the dev is aware of its source when calling it. He also claimed relevant experience when questioned, and showed an irrelevant example to support that claim.
https://github.com/ungap/raw-json/issues/6#issuecomment-2434...
Being spec compliant means being compliant with the entire spec, not just a "reasonable subset of the spec", picked by the author of the ponyfill/polyfill. And being secure only in the presence of normal inputs is... pretty meaningless afaict? Anything is secure if the inputs are "nice to the implementation". That isn't a typical bar for "it's secure".
Whether every use case that just wants to roundtrip BigInt through JSON _needs_ a fully spec compliant & generally secure solution is a different question. But at that point it's about picking a solution for a related use case, not about actually standing in for the upcoming browser feature.
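To make that distinction concrete, here is a minimal sketch of what the spec'd JSON.rawJSON is for, using the BigInt round-trip case mentioned above; the error cases in the comments are my reading of the proposal, not claims about the ponyfill from the PR:

    // Requires an engine that already ships JSON.rawJSON natively.
    const big = 12345678901234567890n;

    // Round-trip a BigInt through JSON without losing precision:
    const out = JSON.stringify({ value: JSON.rawJSON(big.toString()) });
    console.log(out); // {"value":12345678901234567890}

    // The spec pins down the edge cases too: the raw text must be a primitive
    // JSON value with no leading or trailing whitespace.
    // JSON.rawJSON('{}');   // SyntaxError per the proposal
    // JSON.rawJSON(' 1 ');  // SyntaxError per the proposal

A stand-in that skips those checks can still be handy, but that is exactly the gap between "works for my inputs" and "implements the spec".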
> calling it insecure and not to spec when it’s secure and it does the exact same thing unless given unusual input
See, the "unless given unusual input" thing is part of where MDN was in the right and OP is in the wrong.
A polyfill/ponyfill that isn't perfectly spec-compliant can be useful, but it's reasonable for MDN to refuse to endorse it, given that their pages describe the specs. And trying to argue that it still counts as spec-compliant when it doesn't handle edge cases is nonsense; the edge cases are why we have specs! If we didn't have to standardize even the edge cases, an informal description of the solution would do the trick.
Hmm, I can't take seriously this defense of the job MDN did in that thread. That's just one person, but he's representing the organization and nobody else steps in. I guess we're done here.
Okay! So you decided beforehand that "MDN" is in the wrong, and now you've decided to ignore factual info to the contrary.
That's not a very accurate summary. I think you're probably referring to [1] which says
> because polyfills are really hard to get right and we should treat everything as insecure and wrong by default ... I took a brief look at your code and I don't think it's spec-compliant enough to be advertised as a polyfill/ponyfill because it is prone to global pollution.
They then demonstrate that it's not compliant, which the contributor seems to think is not relevant.
Mostly, this contributor comes across as hurt that their PR wasn't immediately merged by virtue of their many years in the field. I might be missing something here, though, that puts the MDN team in the wrong, but...
[1] https://github.com/mdn/content/pull/36294#issuecomment-24076...
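For anyone unsure what "prone to global pollution" means in practice, here is a generic, made-up illustration of the failure mode; it is not an excerpt from the library discussed in the PR, just the kind of concern the phrase usually refers to:

    // A naive ponyfill that looks up intrinsics at call time can be subverted
    // by any script on the page that patched the globals earlier:
    function naiveStringify(value) {
      return JSON.stringify(value); // resolved on the global JSON object when called
    }
    // e.g. some other script does: JSON.stringify = () => '"pwned"';

    // A defensive ponyfill captures what it needs at load time (assuming its
    // module is evaluated before any untrusted code runs):
    const stringify = JSON.stringify;
    function defensiveStringify(value) {
      return stringify(value);
    }

Whether that standard should apply to a small ponyfill is exactly what the thread is arguing about.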
MDN simply does excel at documentation. It is the ONLY place where one can learn modern web development from scratch without a hidden agenda. Everyone else is either pushing their framework, their online-courses platform, or their own browser ecosystem.
The behavior of said "long-time contributor" (I didn't bother checking whether they actually contributed) is very questionable.
The author of that PR is acting like a spoilt child. I would reject his contributions on principle alone.
His reaction, of course, was due to Mozilla spitting in his face, since his repository was not popular enough to warrant attention.
His PR broke their policy against linking to one's own resources; it was a policy-breaking PR. It's straightforward.
I just see a guy ranting for days because his contribution was rejected. Not sure what point that link is supposed to make.
The link I’m reading (the one you sent) starts with planning, not font loading?
The second link seems very irrelevant, but it makes Mozilla look good. The long-time contributor in that thread is giving a showcase of how not to behave in open source. Props to Mozilla for not giving in to the manipulative bully-play-victim contributor:
Comment of his, for reference: “Once again, if this was the reason for rejection I would've been way happier (it's 3LOC extra) to react to that reasoning, but I am fully sure right now even if I bring "secured" (it's a race condition in the real-world) call and apply to the ponyfill you'll find other awkward and antitrust conflicting arguments to nuke my link ... can you confirm? If yes is the answer, once again, me and you have very different meaning around working to push the Web forward (and it's sad you work for Mozilla, I don't), if no is the answer, I'll publish a fix ASAP and you should re-consider closing both PRs around this topic.
It's your call.”
It is really disappointing how much of the previous open-source ethos seems to disappear every time Mozilla has updated anything over the last few years.
I am not involved enough to know what kind of changes or politics are responsible, but I sure hope it reverses.
Mozilla doesn't seem to care much about creating linkrot. They've previously deleted a bunch of historical docs such as their JavaScript engine release notes with changelog information.
Failing to preserve history is likely a deliberate decision.
"He who controls the past controls the present. He who controls the present controls the future."
It's simpler: someone's idea of how URLs should look keeps changing.
Archiving is for archivists, and luckily, the web has some. If you don't want to be one, you can't tie yourself to the past.
There's definitely a middle ground between archivist and reckless bringer of breaking changes. In most cases it doesn't even require much extra work to maintain; the people in charge just don't care at all, or have the same attitude of "just let someone else do it."
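For what it's worth, that middle ground really can be cheap. Here is a minimal sketch of a legacy-URL redirect table in Node, with hypothetical paths standing in for whatever old scheme got retired:

    import { createServer } from 'node:http';

    // Hypothetical mapping from retired URLs to their new homes.
    const legacyRedirects = new Map([
      ['/docs/old-section/page', '/docs/new-section/page'],
      ['/releases/changelog', '/docs/archive/changelog'],
    ]);

    createServer((req, res) => {
      const target = legacyRedirects.get(req.url ?? '');
      if (target) {
        // A 301 keeps old bookmarks, search results, and inbound links alive.
        res.writeHead(301, { Location: target });
        res.end();
        return;
      }
      res.writeHead(404);
      res.end('Not found');
    }).listen(8080);

Maintaining a table like that (or its equivalent in a static host's redirect config) costs a few lines per moved page, not an archival project.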
Kind of reminds me of the old meme with Homer Simpson lying on the Nike logo like it's a hammock, with text saying, "can't someone else do it?" [1]
[1] https://starapp.in/wp-content/uploads/2021/10/WhatsApp-DP-27...
[2] actually this one feels more accurate though the link looks more fragile: https://external-preview.redd.it/cn-YsXhRWZpJlCZKMkEJoG3ibMd...
Very ambitious of them: https://developer.mozilla.org/en-US/docs/Learn_web_developme...
and I think this is a soft joke: https://developer.mozilla.org/en-US/docs/Learn_web_developme... it currently just says TODO
Very confusing post. I took a look at their "Learn Web Development" section and I am confused as to why they link out to third parties when all the content that would be needed is pretty much already in the MDN knowledge base on their own site.
The Neopets HTML Guide [1] remains the best beginner’s guide to Web development.
> with the aim of making MDN more accessible to non-experts and helping to take new web developers from "beginner to comfortable".
I love this. Maybe there's still hope... Been doing web development for over a decade and I'm still not "comfortable" with it >.<
It’s just a hot take from a non-technical person who made their way up the totem pole. They are marketing the docs now.
It's interesting, given the focus on supply-chain safety, that they've decided to recommend only core-js. From my perspective, core-js feels like the top candidate for the next left-pad / colors.js style author-induced ecosystem failure, given the author's past attitudes and financial issues.
I looked into the core-js author's story and there's nothing off about him to me. He just played a role in post-install messages being curtailed. https://docs.npmjs.com/cli/v9/commands/npm-fund?v=true As for that other thing, this puts it well: "I won't get into details - no one knows the full story - so I let you make your own opinion". https://www.izoukhai.com/blog/the-sad-story-of-denis-pushkar... I read the story (the link in that post is old) and I ended up giving him the benefit of the doubt. https://github.com/zloirock/core-js/blob/master/docs/2023-02... Also noted in that post: Babel didn't fork it. That's another thing to take into account when forming your own opinion.
The benefit of the doubt is a luxury. Yes, of course the author deserves it, but if you're operating a bank, a government, or a military (granted, not the core audience of this post), you can't afford to give the benefit of the doubt.
A lot of words and not much information density.
Is the page layout meaningfully different on some other device/browser?
I see less than 30% of my screen space being used for actual content. It dropped below 50% somewhere around the time they decided they like LLMs.
CMD-f "artificial intelligence" - 0 results.
phew!
Lies, web dev is a 7-day course on Next.js and ancillaries.
/s