

Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology, politics, and science fiction.
Spent many years on Reddit before joining the Threadiverse as well.
An equally-true headline: “At last, a promising use for AI agents: debugging smart contract code.” The availability of this tool should make smart contracts more secure in the future, and cryptocurrency more reliable as a result.
The main problem with Biosphere 2 was that it was as much an instance of performance art as it was an attempt to create a sealed biosphere. When you’re doing an experiment you should be trying to control variables, not throwing everything into a huge pot and seeing what happens.
Now make the exact same meme but substitute “AI training” for “piracy” and watch the downvotes flow in.
Sure, not disputing that. I’m more annoyed by the double standard regarding his successful decisions.
What I mean is that when Musk-owned companies have successes people are very often quick to accuse him of “just hiring smart people” or “just buying a successful company.” It’s only when those companies have failures that he gets credit for being hands-on in their design decisions.
Don’t get me wrong, I think Elon Musk is a pretty terrible person both in terms of his personality and his politics. But pretty terrible people can nevertheless be smart and make good engineering decisions. Just look at von Braun as a prime example.
Always interesting to see the view of the degree of Elon Musk’s involvement in his companies’ decisions swing depending on whether the outcome is good or bad.
They are using them, however. They’re visiting websites with them, using apps with them, and so forth.
It’s just predicting when the wars start, not when they end. They can overlap.
It costs so much to make an AAA game these days that it must earn an enormous amount of money to be profitable, which means it needs to appeal to as broad a market as possible, which means nothing niche or unusual. I think movies are having the same problem.
Perhaps be more succinct? You’re really flooding the zone here.
You have tunnel vision on this issue.
No, I’m staying focused.
Ah, so that’s what those two swellings on her chest are.
That is absolutely ridiculous. The load AI scraping puts on sites vastly outstrips anything those sites were built to handle, as evidenced by the fact that the systems are going down.
Yes. Which is why I’m suggesting providing an approach that doesn’t require scraping the site.
It’s ironic that you’re railing against capitalism while espousing exactly the sort of scarcity mindset that capitalism is rooted in, whereas I’m the one taking the “information wants to be free” attitude that would normally be associated with anti-capitalist mindsets.
Do you know how excited I was when LLM tech was announced? Do you know how much it sucked to realize, so soon, that companies were going to do their best to use it to optimize profits?
They do that with everything. Does that mean that everything must therefore become some kind of all-or-nothing battleground wherein companies must be thwarted?
It’s not as simple as, “Oh, you say that you believe in freedom of information, but curious how you don’t want private companies to use it to make money at your expense! Guess you’re a hypocrite.”
Emphasis added. That part is where you’re in error about my view: it’s not at my expense. It doesn’t harm me any.
Tell me what you actually believe, or stop cycling back to this like it’s a damning rebuttal.
I have been.
I’m not “taking their side.” I’m just not actively trying to harm them. The world is not a zero-sum game, it’s often possible for everyone to get what they want without harming each other in the process.
Yes, I know the companies are not the same as normal patrons. I don’t care that they’re not the same as normal patrons. All I’m concerned about is that the normal patrons get access to the data. The solution I proposed does that.
The problem, as I see it, is that’s not all that you are concerned about. Your goal also includes a second aspect; you want those companies to not have access to that data. So my proposal is not acceptable because it doesn’t thwart those companies.
I’m not drawing an equivalence between companies and individual patrons, I’m just saying my goals don’t include actively obstructing those companies. If they can get what they want without interfering with what the normal patrons want, why is that a bad thing?
Bandwidth can’t, though.
Bandwidth is incredibly cheap. The problem these sites are having is not that they’re hitting bandwidth limits; it’s that generating each page on demand takes server-side processing. That’s why Wikipedia’s solution works - they offer all the “raw” data in a single big archive, which takes just as much bandwidth to download but way fewer server resources to serve (because there’s literally no processing - it’s just a big blob of data).
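For anyone who wants to see what that looks like in practice, here’s a rough Python sketch of fetching one of the Wikimedia dumps. The URL points at the real dumps.wikimedia.org “latest” directory, though the exact filename is just an example - the point is that the server only has to stream bytes from a static file, with no rendering or database work:

```python
import requests

# Standard Wikimedia dump location; the filename here is an example,
# the "latest" directory lists whatever the current dump actually is.
DUMP_URL = "https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2"

# Stream the archive to disk - the server just sends bytes from a
# static file, no page generation or database queries involved.
with requests.get(DUMP_URL, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("enwiki-pages-articles.xml.bz2", "wb") as out:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            out.write(chunk)
```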
Is it okay to hire a bunch of people to check out half a library’s books, then rent them to people for money?
This analogy fails because, as I said, data can be duplicated easily. Making a copy of the data doesn’t obstruct other people from also viewing the data provided you avoid the sorts of resource bottlenecks I described above.
Is your problem really about the accessibility of this data? Or is it that you just don’t want those awful for-profit companies you hate to have access to it? I really get the impression that that’s the real problem here - people hate AI companies, and so a solution that gives everyone what they want is unacceptable because the AI companies are included in “everyone.”
I don’t understand why the burden is on the victims here.
They put the website up. Load balancing, rate limiting, and such go with the turf. It’s their responsibility to make the site easy to use and hard to break. Putting up an archive of the content the scrapers want is an easy, straightforward way to accomplish that.
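To make the “goes with the turf” part concrete, here’s a minimal token-bucket sketch in Python of the kind of per-client rate limiting I mean. In practice you’d more likely do this at the reverse proxy (nginx’s limit_req module, for example), but the idea is the same; the class and numbers here are purely illustrative:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller would respond with HTTP 429

# One bucket per client: e.g. TokenBucket(rate=5, capacity=20) lets a
# scraper burst 20 requests, then throttles it to 5 per second.
```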
I think what’s really going on here is that your concern isn’t about ensuring that the site is up, and it’s certainly not about ensuring that the data it’s providing is readily available. It’s that there are these specific companies you don’t like and you just want to forbid them from accessing otherwise freely accessible data.
Unlike water, though, data can be duplicated easily.
That suggestion is exactly the same as what I started with when I said “IMO the ideal solution would be the one Wikimedia uses, which is to make the information available in an easily-downloadable archive file.” It just cuts out the Aaron Swartz-style external middleman, so it’s easier and more efficient to create the downloadable data.
Ooh, I used to mod Luanti a lot. Wonder if any of my old work is on this server. Is it just straight Mineclonia?