Decipher #52
deciphered in ⏱️ 28s
⭐⭐⭐⭐⭐
https://decipher.wtf
Huh, that’s probably the best I’ve ever done on one of these…
Ah great, that could be why a bunch of my photos didn’t get metadata. I’ll look into that, thanks for the tip.
Ooh, might look into that instead, actually. I always love a reason to write myself a little tool, but dealing with Google’s bull makes it much less appealing to me when existing tools can do it for me.
Just gone through this whole process myself. My god does it suck. Another thing you’ll want to be aware of around Takeout with Google Photos is that the photo metadata isn’t embedded in the files as EXIF like with a normal service; instead it’s exported as an accompanying JSON sidecar file for each image. I’m using Memories for Nextcloud, and it has a tool that can restore the EXIF metadata using those files, but it’s not exact, and now I have about 1.5k images tagged as being from this year when they’re really from 2018 or before. I’m looking at writing my own tool to restore some of this metadata, but it’s going to be a right pain in the ass.
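For anyone attempting the same thing, a minimal sketch of the approach: read each sidecar’s `photoTakenTime` Unix timestamp and convert it to an EXIF-style datetime. This assumes the common Takeout layout where `IMG_1234.jpg` sits next to `IMG_1234.jpg.json` (newer exports sometimes name sidecars differently, so check your own archive), and the exiftool call at the bottom is illustrative, not something I’ve verified against your setup.

```python
# Sketch: restore photo dates from Google Takeout's JSON sidecar files.
# Assumes each image "IMG.jpg" has a sidecar "IMG.jpg.json" containing
# a "photoTakenTime" object with a Unix "timestamp" field.
import json
from datetime import datetime, timezone
from pathlib import Path

def taken_time_from_sidecar(sidecar_text: str) -> str:
    """Return an EXIF-style 'YYYY:MM:DD HH:MM:SS' string from sidecar JSON."""
    data = json.loads(sidecar_text)
    ts = int(data["photoTakenTime"]["timestamp"])
    return datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y:%m:%d %H:%M:%S")

def restore_dates(root: Path) -> None:
    for sidecar in root.rglob("*.json"):
        image = sidecar.with_suffix("")  # strip the trailing ".json"
        if not image.exists():
            continue  # sidecar naming varies between Takeout versions
        exif_dt = taken_time_from_sidecar(sidecar.read_text())
        # Hand off to exiftool (must be installed) to write the date back:
        # subprocess.run(["exiftool", f"-DateTimeOriginal={exif_dt}", str(image)])
        print(image, "->", exif_dt)
```

Timestamps in the sidecars are UTC, which is part of why third-party restore tools drift by a few hours depending on how they handle time zones.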
I feel like you’re viewing this from the wrong angle, or at the very least we’re viewing it from different angles. You seem to be describing a binary classification (“is this plant edible?”) rather than a group classification (“what plant is this?”), where edibility is an attribute of the plant to be returned to the user (yes; no; when green; only the roots; etc.). The latter is the approach most of these apps take: classify the image into a species (or a list of potential species), then give the user details such as identifying features, common growing areas, edibility, and lookalikes. You’re right about softmax - it’s been a couple of years since I’ve done the programming side of this, so my terminology is a bit rusty.
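To make the “group classification plus attributes” shape concrete, here’s a rough sketch: the classifier answers “what plant is this?”, and edibility is just one field looked up afterwards. All species names and details below are placeholders, not real botanical data.

```python
# Sketch of group classification with attributes attached afterwards.
# The classifier only predicts a species label; the app then returns
# attributes like edibility and lookalikes from a knowledge base.
from dataclasses import dataclass, field

@dataclass
class SpeciesInfo:
    name: str
    edibility: str                      # e.g. "yes", "no", "when green", "only the roots"
    lookalikes: list = field(default_factory=list)

# Hypothetical knowledge base keyed by the classifier's predicted label.
KNOWLEDGE_BASE = {
    "example_species_a": SpeciesInfo("Example species A", "only the roots", ["example_species_b"]),
    "example_species_b": SpeciesInfo("Example species B", "no", ["example_species_a"]),
}

def describe(predicted_label: str) -> SpeciesInfo:
    """Turn a predicted species label into user-facing details."""
    return KNOWLEDGE_BASE[predicted_label]
```

The point is that edibility is never a classification target itself, so the model’s uncertainty stays attached to the species guess, where the user can weigh it against the listed lookalikes.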
This is blatantly false. Classification tasks like this all produce a level of certainty for each possible category - it’s just up to the person writing the software to interpret those certainties in a way that’s useful to the user, whether that’s saying “I don’t know” when the certainties are too spread out, or providing a list of options like other people in this thread have said their apps do. The problem is that “100% certainty” comes off well with the general public, so there’s a financial incentive to make the system seem more certain than it is by taking only the top result (an argmax over the softmax probabilities) and presenting it as the answer.
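A minimal sketch of what I mean: softmax turns the model’s raw scores into per-category certainties, and it’s entirely an app-level decision whether to show only the top hit, a ranked list, or “not sure” when nothing is confident enough. The class labels and the 0.6 threshold here are made up for illustration.

```python
# Softmax over raw scores, plus an app-level interpretation step.
import math

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]  # shift for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

def interpret(scores, labels, threshold=0.6):
    """Return either a confident label or 'not sure', plus the top candidates."""
    probs = softmax(scores)
    ranked = sorted(zip(labels, probs), key=lambda p: p[1], reverse=True)
    if ranked[0][1] < threshold:
        return "not sure", ranked[:3]   # certainties too spread out: hedge
    return ranked[0][0], ranked[:3]     # one clear winner: name it
```

With `interpret([5.0, 1.0, 0.0], ["oak", "maple", "pine"])` the probabilities concentrate on the first class and it names it outright, while `interpret([1.0, 1.0, 1.0], ...)` gives every class equal certainty and falls back to “not sure” - exactly the behaviour the “always 100% certain” apps choose not to ship.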
Damn dude, they got it to a T. This whole page is a discovery for me, gonna have to go through and listen to all of this.
Legit, I’ll take this over the undocumented spaghetti I too often see written by “professionals”.
Paul, John, George, and Ringo were 4 friends, and they were all Beatles, so I think they might have skewed this data a little?
2fer on this one - got a fun new keyboard and discovered Obtainium, which I’m quite excited about!
Visual Studio and VS Code are two separate products, I’m afraid. Visual Studio is a .NET IDE and build tool, as opposed to VS Code which is essentially an extensible text editor.
Edit: also the screenshot looks like it might be from Slack?