>DoNotPay also did not "hire or retain any attorneys" to help verify AI outputs or validate DoNotPay's legal claims.
Wow, that's brave. Create a wrapper around ChatGPT, call it a lawyer, and never check the output. $193k fine seems like peanuts.
Sometimes I think about where I would be in life if I had no moral or ethical qualms. I'd probably be running a company like this.
DoNotPay predates GPT by quite a bit -- it used to be pretty positively received on HN e.g. https://news.ycombinator.com/item?id=13822289
> it used to be pretty positively received on HN
I think people love the idea of DoNotPay: A magical internet machine that saves people money and fights back against evil corporations.
Before ChatGPT they were basically mad libs for finding and filling out the right form. Helpful for people who couldn't figure out how to navigate situations by themselves. There is real value in this.
However, they've also been running the same growth hacking playbooks that people disdain: False advertising, monthly subscriptions for services that most people need in a one-off manner, claiming AI features to be more reliable than they are, releasing untested products on consumers. Once you look past the headline of the company you find they're not entirely altruistic, they're just another startup playing the growth hacking game and all that comes with it.
>they're just another startup playing the growth hacking game
When is humanity going to start seeing some patches against these exploits? Is common sense still in beta?
There's altruism, running a business, and unrestrained avarice. Sometimes libertarians are as prone to equating the first two as leftists are the last two.
> Before ChatGPT they were basically mad libs for finding and filling out the right form
Who is they? And why did you make this a left/right issue?
Yes! I used this "AI" tool to help a friend write a letter to her landlord. It was not at all "generative AI" and seemed to just paste modules together based on her answers to its questionnaire.
To your second point, it's very funny how OpenAI seems to have soured the tech crowd on tech.
The race to the bottom in the ruthless and relentless pursuit of profit is what soured us on tech, and the AI hype train is but one in a long procession.
It's a love-hate relationship with AI.
All engineers I know in tech are bashing AI left and right and going home to work on AI projects in their free time.
>To your second point, it's very funny how OpenAI seems to have soured the tech crowd on tech.
In this particular case, I'm not sour because of OpenAI. I am sour because of deceptive and gross business practices highlighted in the article.
Wasn't the tech crowd getting soured on tech by all the ads eating the world?
AI helps our lives in many ways and it's a shame that the LLM era has perverted the term with the same bad smells and scamminess of the NFT era.
> To your second point, it's very funny how OpenAI seems to have soured the tech crowd on tech.
They represent an amplifier for the enshittening that was already souring the tech crowd on tech.
LLM's used in this sort of way, which is exactly OpenAI's trillion dollar bet, will just make products appear to have larger capabilities while simultaneously making many capabilities far less reliable.
Most of the "win" in cases like this is for the product vendor cutting capital costs of development while inflating their marketability in the short term, at the expense of making everything they let it touch get more unpredictable and inconsistent. Optimistic/naive users jump in for the market promise of new and more dynamic features, but aren't being coached to anticipate the tradeoff.
It's the same thing we've been seeing in digital products for the last 15 years, and manufactured products for the last 40, but cranked up by an order of magnitude.
It's exhausting and disheartening.
> it used to be pretty positively received on HN
Doesn’t that call into question the decision making of folks on HN rather than being a positive view of the product?
Yep. Turns out HN is just average people who think they're really smart because they know how to write code.
Given the narrow scope it had in 2016 (parking tickets in two specific cities), I can see a petty case like that being automated. The 2017 expansion into asylum seeking seems like a much bigger hurdle, but I wouldn't be surprised if a lot of that process can be automated as well.
Seems like the killing blow here was claiming it can outright replace legal advice. Wonder how much that lie made compared to the settlement.
But yes, HN in general is a lot more empathetic towards AI than what the average consensus seems to be based on surveys this year.
> It's something I personally find very bizarre, but I've definitely noticed that a lot of people have a very strong mental block about doing things on a computer, or even a browser.
It's interesting that many have expressed something similar in regards to the current LLMs, for programming for example: that even if their output isn't exactly ideal, they still lower the barrier of entry for trying to do certain things, like starting a project in Python from scratch in a stack that you aren't entirely familiar with yet.
Not sure about the history, I based my comment on this quote from the article:
>[...] DoNotPay's legal service [...] relying on an API with OpenAI's ChatGPT.
Perhaps they rolled their own chatbot then later switched to ChatGPT? Either way, they probably should have a lawyer involved at some point in the process.
Yes I think you are right about that. Someone else called it "mad libs" and that is very much what it felt like back in 2017/18.
Idk why they needed to have a lawyer involved though. Many processes in life just need an "official" sounding response: to get to the next phase, or open the gate to talk to a real human, or even to close the issue with a positive result.
Many people are not able to conjure up an "official" sounding response from nothing, so these chatbot/ChatGPTs are great ways for them to generate a response to use for their IRL need (parking ticket, letter to landlord, etc).
My understanding is that they had much more linear automation of very specific, narrow, high-frequency processes — basically form letters plus some process automation — before they got GPT and decided they could do a lot more “lawyer” things with it.
> it used to be pretty positively received on HN
HN is a fickle beast
On a long enough timeline we all (with the exception of narcissists) see our younger selves as little idiots.
> Sometimes I think about where I would be in life if I had no moral or ethical qualms. I'd probably be running a company like this.
I think about this all the time. If I didn’t have a conscience, I would be retired by now.
You might be retiring in jail, though.
Unfortunately, "if it's an app, it's legal".
> Sometimes I think about where I would be in life if I had no moral or ethical qualms. I'd probably be running a company like this.
I invented smaller variants of Deliveroo, Airbnb, and Uber in my mind around 2008, but I thought: no, the only way to make any money would be to exploit people and break laws. Honestly, what held me back was more the hassle of lawyers to make it all work. I didn't think I could stomach the effort.
DoNotPay should have used DoNotPay to fight the FTC ruling. Winning that way would have been the ultimate outcome.
Given how bad an average attorney is, I wonder if chatgpt would actually be an improvement.
The average chatgpt legal brief just makes up case law, which is definitely not an improvement even over the worst attorneys out there.
> Sometimes I think about where I would be in life if I had no moral or ethical qualms. I'd probably be running a company like this.
And that's why you're not a criminal. This is a criminal act, approved and implemented by criminals. The main thing is that in the US it usually pays to do white-collar crime, as long as it's not too big or too embarrassing to politicians.
Is "brave" a euphemism for stupid?
I heard it's the British way of saying "that's insane".
https://english.stackexchange.com/questions/574974/etymology...
I can think of very few situations where someone would say "wow that's brave" and not mean "wow you're an idiot"
To prove that DoNotPay does not work, they will use DoNotPay on themselves to defend against this case.
I'm not sure they mean "LLM based AI".
To me it looks like they automated some boilerplate legal forms and marketed it as "AI" to capitalize on the hype.
>I'm not sure they mean "LLM based AI"
The article states that they use ChatGPT.
What is morally or ethically wrong with what they did?
Maybe it's morally or ethically wrong to prosecute them and take their belongings?
I consider boldly lying about the efficacy of your product in your advertisements to be unethical.
You don't, I guess. That's fine.
Let's ask a chatbot and find out.
It's crazy that people take advice from anyone without them citing the exact legal clause.
Then you must find it very frustrating to actually receive legal advice, because it is often more complicated than that and there sometimes is no such clause!
Did they help regular people defend themselves while saving on legal costs or not?
Most of these cases wouldn't be defended at all otherwise.
>Did they help regular people defend themselves while saving on legal costs or not?
Do you get a free pass to do shitty things as long as you do some good things too?
I am totally onboard with the concept of the business, just not this particular implementation of it.
That's a little bit how the law works. If you get sentenced for a conviction, your good deeds will affect the decision. Sometimes people get off entirely based on who they are (e.g. athletes, execs, etc).
This is exactly the right question.
Did they provide value to the user? Yes, nearly any situation in life involving money can be improved with a top lawyer on retainer, but that isn't always viable or economical
This is my issue with AI. We know it spits out nonsense sometimes. Not just random questions, but even code generation.
It will no doubt improve, but somebody has to confirm that there are no errors in the output. If it has 1 error in 10,000 now, let's say it improves by two orders of magnitude, so now it's 1 in 1 million.
Would that be ok for legal or medical decisions? I don't think so. How about business decisions? Nope.
As long as AI is generating output with errors, it's going to have limited use.
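A quick back-of-envelope sketch of the scale argument above. The error rates are the illustrative numbers from this thread, and the 2-million-case volume is the figure DoNotPay itself claims:

```python
def p_any_error(error_rate: float, n_outputs: int) -> float:
    """Probability that at least one of n independent outputs contains an error."""
    return 1 - (1 - error_rate) ** n_outputs

# At 1 error in 10,000, a service handling 2 million cases
# is essentially guaranteed to ship errors:
print(p_any_error(1e-4, 2_000_000))  # effectively 1.0

# Even at 1 in 1 million, an error is still more likely than not
# at that volume (about 0.86):
print(p_any_error(1e-6, 2_000_000))
```

So even the optimistic improved error rate doesn't make errors rare at the volumes these products operate at; it just makes any individual user less likely to hit one.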
Another Silicon Valley startup looking to get rich quick, following in the footsteps of Uber, Airbnb, DoorDash, WeWork...etc, which have all played in the legal grey areas
Wow buddy. I guess we could say the dollar amount you were willing to forego is the price of your sense of superiority.
Haha, yeah, feeling very superior over here with my... not lying to my customers. Yep!
lmao, at least they should have fine-tuned GPT-4
> if I had no moral or ethical qualms. I'd probably be running a company like this.
You mean you'd be like any other corporation or property manager or attorney? If you operate in the confines of the law that's all that matters. If you give normal people the same power to litigate as a billionaire, that's a feature, not a bug.
>If you operate in the confines of the law that's all that matters.
This is uh.. Yeah. This is what I meant by having no moral or ethical qualms.
There are things that I find immoral which are not illegal. I do not do those things, even though legally I could.
So, religious zealotry? The man in the sky said "don't do that!"?
"This is uh.. Yeah." - can you clarify this remark please?
> "In 2021, Browder reported that DoNotPay had 250K subscribers; in May 2023, Browder said that DoNotPay had “well over 200,000 subscribers”. To date, DoNotPay has resolved over 2 million cases and offers over 200 use cases on its website. Though DoNotPay has not disclosed its revenue, it charges $36 every two months. Given this, it can be estimated that DoNotPay is generating $54 million in annual revenue, assuming that all 250K users subscribe for 1 year." [1]
$193K seems like a pittance compared to the money they're making off of this.
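The arithmetic in that quote checks out, under its own generous assumption that every subscriber pays for a full year:

```python
# Back-of-envelope check of the quoted revenue estimate.
# Assumes all 250K subscribers pay $36 per two-month billing cycle, all year.
subscribers = 250_000
price_per_cycle = 36   # dollars, billed every two months
cycles_per_year = 6

annual_revenue = subscribers * price_per_cycle * cycles_per_year
print(annual_revenue)  # 54000000, matching the $54M figure
```

Against that, $193K is roughly a third of one percent of the estimated annual revenue.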
>$193K seems like a pittance compared to the money they're making off of this.
I don't have any special knowledge of this specific case, but it's important to note as a general principle that often the point of these fines is as the start of a process. It creates a formal legal record of actual damages and judgement, but the government doesn't see massive harm done yet nor think the business should be dead entirely. They want a modification of certain practices going forward, and the expectation is that the company will immediately comply and that's the end of it.
If instead the business simply paid the fine and flagrantly blew it off and did the exact same thing without so much as a fig leaf, round 2 would see the book thrown at them. Defiance of process and lawful orders is much easier to prove and has little to no wiggle room, regardless of the complexity that began an action originally. Same as an individual investigated for a crime who ends up with a section 1001 charge or other obstruction of justice and ends up in more trouble for that than the underlying cause of investigation.
So yes, not necessarily a huge fine. But if there weren't huge actual damages that seems appropriate too, so long as the behavior doesn't repeat (and everyone else in the industry is on notice now too).
> DoNotPay accepted no liability.
This is founder-raising-funds math (or VC-looking-for-liquidity math). 200k subscribers might not mean paid subs, and it certainly doesn't mean a full year of paid subs. This could be $9M (a single two-month billing cycle for 250k paid subs) or lower.
Fair points. I hadn't considered that a trial subscription is still a subscriber.
Their point still stands though. If the output should be reviewed by a lawyer, then the penalty should be all the profits (and maybe also the wages of the CEO) to deter others from doing the same, and ensure that they don't continue in the belief that an occasional 1-2% is perfectly acceptable 'cost of doing business'.
Maybe $193k is all that the FTC felt could be attributed to the "deceptive business practices."
It's weird to think that the FTC is right about the investigation, but somehow flubbed the penalty
I think we need to start taking this sort of thing beyond money. I'm not sure if it's warranted in this case, but in general I'd like to see more shareholders going to jail for things their companies did.
If my dog bites somebody, that's on me. It should be no different with a company.
The main product actually works, this is for additional claims that were misleading. It isn't right to compare the settlement to the entire company revenue. Better to compare to the benefit gained by wrongdoing, or the amount of harm caused.
I love the quote they included in their ads, purportedly from the Los Angeles Times but "actually from a high-schooler’s opinion piece in the Los Angeles Times’ High School Insider":
> "what this robot lawyer can do is astonishingly similar—if not more—to what human lawyers do."
To be fair if legal paperwork follows a standard process with standard information, a "robot" can complete many orders of magnitude more than any human lawyer. (I'm also not a lawyer and have no idea if this line of thinking is applicable.)
Honestly most lawyers (that I know at least) just keep templates of most common documents and fill in the blanks as needed
For basic stuff this is 95% of the end product
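The template-and-blanks workflow described above can be sketched in a few lines. This is purely hypothetical (the field names and letter wording are made up, not anyone's actual templates):

```python
from string import Template

# A "mad libs" style form letter: pick a template, fill in blanks
# from the user's answers to a questionnaire.
APPEAL_TEMPLATE = Template(
    "Dear $authority,\n\n"
    "I am writing to contest parking ticket $ticket_id, issued on $date. "
    "$reason\n\n"
    "Sincerely,\n$name"
)

letter = APPEAL_TEMPLATE.substitute(
    authority="City Parking Bureau",
    ticket_id="A-12345",
    date="2024-09-01",
    reason="The posted signage at the location was obscured.",
    name="Jane Doe",
)
print(letter)
```

No generative model involved: the output is fully determined by the template and the questionnaire answers, which is exactly why the pre-GPT product was predictable in a way the LLM version isn't.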
I guarantee it’s going to be impossible to compete as a lawyer in most fields without doing most of the work with LLMs, probably within a few years.
I expect the benefits of increased efficiency will be seen as temporarily zeroed inflation for legal services (prices actually going down? LOL) and a bunch of rents, forever (more or less, from the perspective of a human lifetime) to whichever one or two companies monopolize the relevant feature sets (see also: the situation with digital access to legal documents). Lawyers will be more productive but I expect comp will stay about the same.
And I think that as someone fairly pessimistic about the whole AI thing.
And 95% of a doctor's job is just saying "your checkup looks OK Joe, just try to get some exercise and eat more fiber".
I agree that would be a valuable proposition were it to be true (no idea, IANALE). But what I found impressive was the claim that said "robot" could do it even more similar to a human's work than a human lawyer could!
This is a very sneaky ethically gray company. Their app is not only of terrible quality but also full of dark patterns. I'm convinced that any revenue they make comes from people who can't figure out how to cancel. Stay away from it.
Ironic given the app’s purpose.
The legal system has a great sense of self preservation. They will surely fight anything that possibly encroaches on their domain, especially things that give non lawyers the tools to defend themselves without feeding the machine.
Sometimes the enemy of my enemy is still my enemy. Many organizations that purport to be helping the little guy are actually just exploiting them for profit.
As is the case with this exact company: "Fight Corporations. Beat Bureaucracy. Find Hidden Money." That is exactly and entirely what an exploiting company would say it does.
Official release: https://www.ftc.gov/news-events/news/press-releases/2024/09/...
> "None of the Service’s technologies has been trained on a comprehensive and current corpus of federal and state laws, regulations, and judicial decisions or on the application of those laws to fact patterns," the FTC found
Wow!! That seems so simple, literally a few weeks of work in today's ecosystem. Thoroughly testing it may take a little more time, but wow, I wonder if they were even attempting to do RAG.
DoNotPay having to pay a fine is just ironic.
Could they...employ their own services?
Haha, my thoughts precisely. The first 4 words of the title were bewildering for me for a few seconds.
> initially was advertised as "the world's first robot lawyer" with the ability to "sue anyone with the click of a button."
There is no world in which allowing that to happen is a good idea.
I understand the sentiment, but to be fair this is currently happening everywhere; only rich people have access, as they are the only ones with lawyers on retainer.
Access to legal services for the poor will change things. In the short term, the judicial system will be overwhelmed and forced to adopt new, more efficient procedures.
What a title: "DoNotPay has to pay."
Can't they ask ChatGPT to write some objection and, at the end of the prompt, put "but make it look like it was written by a lawyer" and send that to the court to waive the fine?
Please read my comment as a joke. The title really sounds funny!
The problem with today's technology is it is indistinguishable from magic. Sometimes the magic is real, sometimes it's an illusion. It's nearly impossible as a regular consumer not deeply knowledgable of the current capabilities of models to know which is which.
Lawyer Kathryn Tewson on Twitter has been calling them out for a long time https://x.com/kathryntewson/status/1838995653630083086?s=46
There’s a lot of hate for the AI marketing aspects, and for the fact that AI isn’t up to par as a full lawyer replacement, but they’ve been around with a very working and usable app way before the AI hype.
Lawyers at huge firms or companies automate the hell out of their legal actions against normal citizens and get things wrong all the time. But it’s okay if they do it because they’re part of the same cabal keeping the legal system afloat. Say what you want negatively about some dark patterns and marketing BS, they’re making legal things affordable to the every day person.
The fine seems fair for overhyped marketing claims, but I hope they keep going and improving.
1. Amusing title.
2. Yeah, it sounds like a smug and shitty company.
3. But if I get some parking ticket or some small wrongdoing, I am totally going to consult with GPT before I go ahead and hire a lawyer. A well-trained AI could definitely do the work of most lawyers better than they could. Lawyers and judges usually just recite rules, previous cases, and known loopholes. They are a human search engine, and they cost quite a bit.
Ok well it seems my test for whether we really have AI yet (are there self-driving lawyers) remains unsatisfied. For me lawyers work is significantly easier to automate with some proto-AI than is software development or driving a car. So although recent AI progress is highly impressive, I'm not retiring until it takes over the lawyers.
Truly one of the worst takes on AI I've seen. What is the purpose of a lawyer? It's not to read the law; the text is free (or should be). It's to advise you, based on the law but also on the immediate, historical, and sociopolitical context, as well as on their understanding of the characters of all the humans involved.
If that gets automated, we are in AGI territory.
> What is the purpose of a lawyer?
This is a good question. Is the point of paying the lawyer the piece of "intellectual output" (contract, specific advice, legal briefs, etc) or the fact that they stand behind it? Is a company using AI liable if the agents sell something for a pittance? Do the AI agents have intent and standing?
Your first paragraph with very small tweaks (laws of physics etc., not law) could also apply to human attendants in elevators.
The second paragraph… hard to make a call on that one.
It's not a question of how easy it is to automate it. It's about how frequent and costly the mistakes are. Lawyer AI is a high bar - plus the people watching you are human lawyers, exactly the kind of people who can make your mistakes more costly.
Even before AI, that website has been making overly optimistic claims for many years. It was never clear to me how real or effective it was. The Wikipedia article has more detail but it seems like this is the first time the government has actually called them out?
Of course this didn't stop them raising $10m from credulous investors in 2021.
In 2021 a literal discord group raised several million dollars so take that into consideration.
I think DoNotPay and find out is the vision of this company.
I am stopping myself from releasing shitty chatgpt wrappers which I see at every trade expo. I am just not doing it because I don't want to add more shit to the shitshow.
They have the chance to make their best marketing material yet.
Honestly not that surprised, the only surprising thing to me is how little of a slap on the wrist this feels like.
It felt like a shaky premise at best as far back as I can remember. Even "standard" things often have many intricacies that a person might not know to say, and it may not let them know/ask them about it.
As an example, think of all the questions TurboTax et al. ask about taxes.
Funny thing is, they're probably still ahead financially vs if they actually hired lawyers to do this on the up-and-up.
Yes, I get it, they did a bad thing. But most of what they are set up to fight is people abusing the legal system, so surely it's not unfair to fight fire with fire?
As an example I parked my car to drop my daughter off at a party and paid online - but mistyped the little number for the car park and ended up paying for 3 hours parking somewhere across the country.
Naturally the private car park tries to charge me 20x the parking fee as a "fine" - which they can whistle for, frankly. But they sent varying letters that sound like, but don't actually say, "court" or "legal action" (things like "solicitors action prior to court").
I kept sending them the same answer they kept rejecting it
Then they actually sued me in county court. Oh wow, I thought, I'd better pay. And as a court judgement is really bad on your credit record (one above bankruptcy), it's serious. But I checked the court website anyway, to be extra careful. And I could challenge it - actually appear before the beak and say "hey, it's not my fault".
So I filled in the form that says “yes I will challenge it, see you in court”
That night my wife said don't be fucking stupid, they have won, pay them.
So I went back the next day - and guess what - they had, after 9 months, withdrawn their action against me. No further need to progress, cancelled.
I called the court to find out WTF
They had, and do every week, mass-spammed the court with hundreds of parking cases, knowing that pretty much everyone would act like my wife and pay a couple of hundred quid rather than risk their credit record. I mean, with a county court judgement you can kiss a mortgage goodbye.
That is simple abuse - an overworked courts system, hundreds, probably thousands, of rubbish claims filed simply to strong-arm people into paying up with legal threats, and no genuine attempt to filter out cases with merit, or even to look only at "repeat offenders".
But is it worth the time of any parliamentarian to take this on? (Well, frankly, yes, it would be a great backbencher cause célèbre, but what do I know.)
Anyhow, there was a point here - there are many, many legitimate companies whose fucking business model is based on legally strong-arming anyone who makes a minor infraction, and that's OK, but having a scammy business model to fight the scammy business model is bad?
Yes, DoNotPay could have stayed on the right side of the line - but then it would frankly have run out of money. I guess we can only put our hope in the hands of our elected representatives :-)
In the US, parking and private parking lots are easily one of the scammiest businesses.
Good, AI shouldn't be anywhere near anything where accountability matters...
About time. I've been waiting on this one for a couple years now.
Maybe they can use their own AI to get out of paying it?
If you mess with the bull, you get the horns!
I prefer to tout the unfrozen caveman lawyer.
My bet is they won’t pay.
I am honestly surprised the fine is not more. We need to see more of these come out as AI is shoved dangerously into places thanks to the ability to use it with little to no technical knowledge.
Especially when you are really just shoving data into an LLM and expecting a response to do some job, you are not training it to do a specific task.
Like the home buying AI that was on HN yesterday.
The fine was about false advertising, not dangerousness. Three of the commissioners signed on to concurring statements emphasizing that they are not opposed in principle to the use of AI in law. (https://www.ftc.gov/legal-library/browse/cases-proceedings/p...) (https://www.ftc.gov/legal-library/browse/cases-proceedings/p...)
Link to related thread: https://news.ycombinator.com/item?id=41638199
If a tenant is able to file a case that costs his corporate, PE-owned landlord $25k to litigate, that's a win in my book. That's the same landlord who increased the rent 20% per year for the last 5 years because of "the Market". Well, Blackstone, welcome to the "market", where each eviction now costs you a collateral amount for ruining a hard-working person's life. Imagine that: a consequence for greed.
oh the irony
It's amazing to me that people think they need other people to resolve disputes, or that "law" is some kind of magic...
And yet people keep thinking so, both selling it as magic, and buying it as magic, and not once taking the time to consider what the words on the paper might mean.
> that "law" is some kind of magic...
Law is some kind of magic, though. Consider the case where I want to cast the "Lawyer" protection spell.
If I chant "I am invoking my right to remain silent. I want to contact my attorney" then the police must stop questioning me and provide me one [1]. The spell worked.
If I chant "This is how I feel, if y’all think I did it, I know that I didn’t do it so why don’t you just give me a lawyer dog ’cause this is not what’s up" then my spell is not strong enough and the police can interrogate me as they see fit [2].
And then there's the time where a wizard had to interpret a comma [3].
[1] https://www.nedbarnett.com/do-i-have-a-right-to-an-attorney-...
[2] https://uproxx.com/culture/louisiana-supreme-court-suspect-l...
[3] https://www.loweringthebar.net/2017/03/the-oxford-comma-use-...
This is precisely the madness that I'm astounded by. Most people think this is sane, normal and moral.
And, I imagine most people reading this think I'm either insane, ignorant or uneducated, or will attach some other adjectives to further alienate me. It's okay, I already feel like an alien.
I'm not looking for an argument. I don't need to convince anyone.
I was hoping for a more receptive audience. Oh well.
It's a risk/reward issue. By analogy, I am perfectly capable of filling out an IRS 1040 form, so why pay a CPA? Because the CPA knows what things can be classified as business expenses. They have first-hand knowledge of which items have passed audits and which the IRS dismissed as farfetched. You're paying for someone who has the insider knowledge to navigate a minefield of non-obvious questions.
Or for a technical analogy, I'm capable of learning any programming language you can throw at me. However, a business who wants to hire someone is going to prefer someone with experience in that language's entire ecosystem. It's not enough to know the syntax. That's the easy part. The harder part is knowing which parts to reference at a given time, which modules experienced devs would choose to solve a specific problem, etc.
Well, same here. I'm wholly capable of reading and understanding the words of a contract or a summons or a lawsuit. What I don't know is the significance of specific phrases in those things, or what I'm allowed to use as evidence on my own behalf, or which issues I might raise that a judge would dismiss as something learned in the first semester of law school. And that's why I'd pay a lawyer to address legal issues for me.
It's the same with plumbing, wiring, programming, writing, cooking, gardening, and most of the other verbs.
For the most part, everyone can do these things, but it's nice to pay someone else to do it, especially in fields where experience gives expertise and, hopefully, wisdom. Also, it's handy to hire a licensed practitioner in fields where the government requires licensure to sell services.
Some people are really worried about making big mistakes that are expensive to clean up when plumbing or wiring or lawyering. It's a legitimate thing to consider.
It's a matter of opportunity costs as well. If you want to be able to do plumbing, wiring, framing... you likely don't also have the time to learn the potentially vast amount of knowledge required to adeptly navigate the legal system.
> Some people are really worried about making big mistakes that are expensive to clean up when plumbing or wiring or lawyering.
Even worse, if you lawyer wrong (or wire wrong), you might not be able to clean it up - you just do your time or perhaps die.
There's a difference, though. Law and medicine are fields with very strong gatekeeping. You also need a licence for many construction engineering roles, and in fact both the stakes and the accountability there are much higher than in the former two (it is a bit appalling how little lawyers and doctors are held accountable for the stupid things they do, as long as they follow the playbook). Realistically you need to learn as much, if not more, to be a good construction engineer (though most construction engineers are not especially good, just as most lawyers and doctors aren't very good). Yet you don't see nearly as much reverence towards construction engineers, and there are far fewer artificial barriers to learning construction engineering.
You can get very far in life by just reading the contract, even farther by googling some context, and even farther by hiring an expert in the field.
The FTC overreached here: the AI was not tested to see if it matches a lawyer's level of work. Why would anyone have to do this kind of study?
Back in the day, anyone who touted AI generated works by default exclaimed that the work was not as good as a human. That changed now but was a valid statement back then.
>the AI was not tested to see if it matches a lawyer's level of work. Why would anyone have to do this kind of study?
The service it purported to offer is licensed. We could sit around and talk shit about the threshold to be licensed, the bar association, or licensing in general. But that's all distraction.
The point is the legal system has implemented a quality standard for providing certain services. There is a new thing providing the same service in novel way. Why wouldn't the legal system expect proof of quality?
I have not seen any evidence that AI output can reach quality levels of a human.
Any non-trivial code generated by it takes more time to debug than just writing it from scratch.
Its "art" is abominably bad and repetitive.
Text generated by AI reads like corporate ad copy written by several committees of committees.
"Lovecraftian nightmare" best describes its video output.
AI voice generation sounds like soulless ripoffs of famous voice actors (the Attenborough clone is the worst) with misplaced stresses and an off-putting cadence derived from being completely unable to understand the broader context of the work it is narrating.
Its explanations on things are inferior to the first paragraph of any wikipedia article on the query topic.
A child with a pirated copy of FL-Studio can make more interesting music.
The wall being erected by AI customer service agents between a problem and an actual human who might be able to solve it is frustrating and useless.
On top of all of that the answers it confidently gives (almost always with no sources) are often extremely wrong.
Is there a secret AI product everyone is using that is actually good?
Edit: AI is however extremely good at rapidly creating an endless stream of barely-passable content designed to distract very cheaply so I expect its use by marketing and social media firms to continue its meteoric rise.
> it's not about trivial vs not trivial.
It's about how common the code is which you are asking the AI to generate and how much contextual clues it has to get the generation right.
The Dagoth Ur voice generation is shockingly good. I found a youtube channel of Dagoth Ur narrating Lovecraft stories; it's as good as human narration IMHO.
> Why would anyone have to do this kind of study?
Because the company chose to publish ads with misleading claims about the efficacy of their system.
Yeah why would any company have to test the efficacy of their product before making claims about its efficacy?
Snark aside, this is literally a quote from their marketing: "what this robot lawyer can do is astonishingly similar—if not more—to what human lawyers do."
So to claim that they “exclaimed that the work was not as good as a human” is inaccurate.