

Not gonna lie, I don’t really blame the AI.
Yeah, it’s not technically impossible to stop web scrapers, but a lasting, effective solution is hard to come by. One easy way is to block the scraper’s user-agent, assuming it uses an identifiable one, but that’s easily circumvented. Another easy and somewhat more effective way is to block the scrapers’ and caching services’ IP addresses, but that turns into a game of whack-a-mole. You could also put content behind a paywall or login and refuse to approve a certain org, but that only works for certain use cases and is also easy to circumvent. If stopping a single org’s scraping is the hill to die on, good luck.
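To make the user-agent point concrete, here’s a minimal sketch (Flask, with a made-up blocklist) of what that kind of filter looks like; it only catches a scraper that announces itself honestly:

```python
# Minimal sketch of user-agent blocking (Flask; blocklist entries are illustrative).
# A scraper that spoofs a browser User-Agent sails right past this.
from flask import Flask, abort, request

app = Flask(__name__)

BLOCKED_AGENTS = {"GPTBot", "CCBot"}  # example crawler names; fill in whatever you're targeting

@app.before_request
def block_known_scrapers():
    ua = request.headers.get("User-Agent", "")
    if any(bot.lower() in ua.lower() for bot in BLOCKED_AGENTS):
        abort(403)  # only works while the scraper identifies itself

@app.route("/")
def index():
    return "content"
```

The IP-based version is the same idea with an address/CIDR set instead of a substring match, which is exactly why it turns into whack-a-mole as the ranges keep changing.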
That said, I’m all for fighting ICE, even if it’s futile. Just slowing them down and frustrating them is useful.
I was reading something similar a few months ago about how the American obesity epidemic came out of nowhere and exploded in the 20th and 21st centuries, and how places with the highest antibiotic use also have correspondingly high obesity rates.
I wouldn’t be surprised if a lot of our modern chronic issues that seemed to come from nowhere (not that they didn’t exist before, they just became much more prevalent) will eventually be traced back in some way to the effects of messing with the human microbiome.
Ah, gotcha. I didn’t go too deep into the code, just did a cursory look. I think it’s still an interesting concept.
I don’t know why this is getting downvoted. It seems like an interesting concept for certain use cases, and it looks like it’s just a tiny team.
Millennials and Gen Z make fun of boomers for being lead-brained. Gen Alpha and Beta will make fun of Millennials and Gen Z for being plastic-brained.
This is why I am dreading the day my 2017 dumb TV dies. It’s really telling that dumb TVs, which should be cheaper to produce and sell, are either not available or very expensive (as in commercial displays). It just proves the point that the consumer is really the product.
This is why I believe scientists should be required to take liberal arts classes, especially ones related to written and spoken language.
And yes, I also think liberal arts students should be required to take some level of hard STEM classes (not watered-down “libarts-compatible” stuff, but actual physics, chemistry, biology, etc.) as well.
Yes to both points! I’m eternally grateful to my high school AP English teachers for teaching me how to write and communicate.
My somewhat unpopular opinion is that we’d be better off as a society if everyone in college took “real” STEM and liberal arts classes. The STEM folks can come to understand the why and the societal implications of what they study (as well as just how to communicate), and the liberal arts types can learn a bit about how the world actually works in a concrete way.
Unfortunately, I’ve been continually struck by how incurious people are. I get that everyone has their interests, but that shouldn’t be to the exclusion of all other study. So, I don’t think this will happen. :/
Depends on your skills. Documentation is always useful. If you have language skills, translating documentation or helping create language packs/translations is valuable too.
That’s just off the top of my head. I’m sure if I thought about it, I could come up with more.
YES! I study AI, and this is exactly how I feel!
Side note: one of my favorite things to do is ask people what their use case for AI is, then watch them sputter out “uh… emails and productivity and things.”
The original paper itself, for those who are interested.
Overall, this is really interesting research and a really good “first step.” I will be interested to see whether it can be replicated on other models. One thing that really stood out, though, was that certain details are obscured because Sonnet is proprietary. Hopefully follow-on work is done on one of the open-source models to confirm the method.
One of the notable limitations is quantifying how an activation correlates to the meaning of the text, which will make any sort of control difficult. Sure, you can just massively increase or decrease a weight, and for some things that will be fine, but for real manual fine-tuning it will be a problem.
I suspect this method is likely generalizable (maybe with some tweaks?), and I’d really be interested to see how this type of analysis could be done on other neural networks.
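For what it’s worth, here’s a toy sketch (my own illustration in PyTorch, not the paper’s code; the layer index and feature direction are placeholders) of what “turning a feature up or down” can look like in practice:

```python
# Toy activation-steering sketch (PyTorch). Not the paper's method; just the
# generic "add a scaled feature direction to a layer's activations" idea.
import torch

def make_steering_hook(feature_dir: torch.Tensor, scale: float):
    """Return a forward hook that adds scale * feature_dir to a block's hidden states."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + scale * feature_dir  # crude "feature knob"
        return (steered,) + output[1:] if isinstance(output, tuple) else steered
    return hook

# Hypothetical usage with a HuggingFace-style GPT-2 model:
# handle = model.transformer.h[20].register_forward_hook(make_steering_hook(direction, 4.0))
# ...generate text and see what changes...
# handle.remove()
```

The hard part the limitation points at isn’t applying the nudge, it’s knowing which direction corresponds to which meaning and by how much.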
Blue or green for me. Never could find a proper teal folder.
I’m all about this. When I made my personal webpage, this is how I did it. I’m surprised it’s not more popular (at least for certain things), because it looks nice and clean, is fast, and crucially, is easy to put together. Most webpages don’t need a ton of JS to “accomplish the mission.” I get that not everything can do this, but there are soooooo many sites that could strip down to something more minimal and end up with better functionality and a better experience. This is a case of less is more.
This is a much better article. OP’s article just shows the author’s surface-level understanding of how coding works and of how well an LLM can actually code. There’s way more that goes into a programming task than just coding.
I see LLMs as having the potential to be almost like a super library. I can prompt GPT, Claude, etc. to write me a custom function that I copy, paste, test, scrutinize, and almost certainly change. It’s a tool that will make someone a more productive programmer. It won’t completely subsume a human’s ability to be creative and put the pieces together.
At the absolute worst over the next decade, I could see programming changing from writing and debugging code to prompting, stitching together, and debugging.
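As a rough sketch of that workflow (OpenAI’s Python SDK here; the model name and prompt are placeholders), the “super library” part is just an API call, and the copy/paste/test/scrutinize part is still on the human:

```python
# Rough sketch of asking an LLM for a function (OpenAI Python SDK; model name
# and prompt are placeholders). The output still gets reviewed, tested, and edited.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you actually have access to
    messages=[{
        "role": "user",
        "content": "Write a Python function that parses ISO 8601 timestamps into datetime objects.",
    }],
)

print(resp.choices[0].message.content)  # copy, paste, test, scrutinize, almost certainly change
```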
Or they just don’t try this at all. The technology seems beyond far-fetched at this point, like mad-scientist far-fetched. The tech is too light on theory to begin practical testing; right now, it’s just inhumane.
Great article! For a few years, I was always deterred from projects because they had already been done, and done better, so there was no reason to bother. Now, though, I just enjoy implementing things in my own janky way and learning a bit along the way.