
The excerpts from the review posted here were _not_ from the section that was quoting the reddit post.

Haven't used it extensively, but there is XWiki: https://www.xwiki.org/xwiki/bin/view/Main/WebHome


They were referring to Pyrex when talking about "shoddy replacements", not Instant Pot (although they didn't specify this in their comment, so confusion is very understandable).


Very much so. NFI what "instant pot" is; I suspect it bears little resemblance to "instant coffee". I miss the "visions" cookware tho.


It's an electric pressure cooker with some additional features. Pretty handy.


> Not sure how parents feel about putting kids in front of a machine that can fabricate lies.

You mean like a human teacher?

Of course, I agree with your point, but worth remembering that teachers can and do (knowingly or unknowingly) give incorrect information to their students.

I think one big difference with ChatGPT etc. is that, as far as I've seen, it will pretty much always give a confident-sounding answer (unless it's on a banned topic) rather than saying something like "I don't know", which a (good) human teacher should do rather than just making something up.


> You mean like a human teacher?

This reminded me of one of my teachers who said that Cuba was a great place to live and the rest of Latin America should be more like them. She was essentially a communist who painted the old Soviet Union and communist countries in general as much better than the capitalist west.


The methodology of the bot that posts the deleted posts every 12 hours is explained here: https://old.reddit.com/r/RedditMinusMods/wiki/index

The graph just shows the number of posts that this bot has found "missing" in each 12-hour period.


It's labelled as the number of removals; I think it's an absolute number of removed posts per day(?)

edit: sibling comment points out it's per 12-hour period, not per day

edit: as for the cap of 50 - it seems that's the number of posts shown on the first page of r/all (using old reddit), so it seems all 50 slots "should" be filled with posts that have been removed


Presumably they can run the pirated version on actual hardware as well (via custom firmware etc.), and therefore compare?


Netflix also (I think briefly) offered DVD rental in the UK, competing with LoveFilm. LoveFilm did continue for a while after the Amazon acquisition, both as DVD rental and with streaming added (LoveFilm Instant, if I remember correctly), which eventually got rebranded into Amazon Instant Video.


Couple of fairly simple things they could do to at least help somewhat:

* Put reviews for the current listing at the top of the reviews (currently the default sort seems to be a vague "Top reviews", but it can be changed to "Most recent", which presumably accomplishes this. The vast majority never change defaults though)

* Clearly mark any review that is for a previous version of the listing, and provide a link to view the listing at the time of review (so customers can easily see if it was a completely different product or a simple typo correction etc.)

* Perhaps make history of listing visible, so customers can see when and how the listing has changed


I don't think this will work well. Minor updates to listings would trigger all of these actions so often that they'd become routine and ignored.

I hate to say it but I think some type of heuristics would be needed here.

1. Has the title significantly changed?

2. Has the price significantly changed?

3. Are the search keywords that were finding the old listing significantly different from those finding the new listing?

4. Have average ratings and common words in reviews changed? (Especially rarer words that match the new and old listing respectively.)

If some of these start to look suspicious then I think you can start to apply your mitigations. You can probably even scale them by how sure you are. For example, reviews are always downranked by age, and significant changes to the listing amplify this effect; you can apply the same weighting to the star rating.
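
Something like this crude sketch is what I have in mind (Python, purely for illustration; the field names, weights and thresholds are all invented, not anything Amazon actually uses):

    # Crude sketch of the scoring idea above; field names, weights and
    # thresholds are invented for illustration.
    def change_score(old: dict, new: dict) -> float:
        """Rough 'how different is this listing now?' score in [0, 1]."""
        def jaccard_dist(a: str, b: str) -> float:
            wa, wb = set(a.lower().split()), set(b.lower().split())
            return 1 - len(wa & wb) / max(len(wa | wb), 1)

        title_diff = jaccard_dist(old["title"], new["title"])
        price_diff = abs(new["price"] - old["price"]) / max(old["price"], new["price"])
        keyword_diff = jaccard_dist(" ".join(old["keywords"]), " ".join(new["keywords"]))

        # Weighted blend; the weights are arbitrary guesses.
        return 0.5 * title_diff + 0.2 * price_diff + 0.3 * keyword_diff

    old = {"title": "Genuine Leather Wallet, Brown", "price": 29.99,
           "keywords": ["wallet", "leather", "gift"]}
    new = {"title": "USB-C Charging Cable 6ft", "price": 7.99,
           "keywords": ["cable", "usb-c", "charger"]}

    score = change_score(old, new)
    if score > 0.6:  # threshold picked arbitrarily
        print(f"Suspicious change (score={score:.2f}): downrank old reviews, flag for human review")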

And of course the real way to prevent this is to flip the incentive. Add human review and a warning before killing the account. Make it so that the cost of being caught negates the benefit of doing this.


Web megacorps are normally allergic to any kind of human review because they are in the business of picking up pennies on each interaction via adverts. It's unsustainable to police the world on that model.

Amazon is in a different space here. Even the smoothest transaction goes through a handful of literal human hands. They have to pay for those hands regardless. At the very least following up on cases where customers (and competitors) flag fraud on their system should be possible.


I'm pretty sure if you gave ChatGPT the old and new versions of the listing, it would have 99%+ accuracy when answering the question, "Are these for the same product?" So they could just run each change through something like that, and wouldn't have to write any custom heuristics.


I'm pretty sure any of a million simpler edit distances could tell you if a product listing was substantially changed.
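
For example, the standard library already gets you most of the way there; a minimal sketch using difflib's similarity ratio as a stand-in for a proper edit distance (the threshold and sample listings are made up):

    from difflib import SequenceMatcher

    def listing_changed(old_text: str, new_text: str, threshold: float = 0.5) -> bool:
        """Treat an edit as substantial when the texts share less than `threshold` similarity."""
        return SequenceMatcher(None, old_text.lower(), new_text.lower()).ratio() < threshold

    # Minor tweak to the same product: not flagged
    print(listing_changed("Luxury Shower Curtain, 72x72, Waterproof",
                          "Luxury Shower Curtain, 72x72, Waterproof, Mildew Resistant"))  # False
    # Listing swapped to a completely different product: flagged
    print(listing_changed("Luxury Shower Curtain, 72x72, Waterproof",
                          "Wireless Earbuds with Bluetooth Charging Case"))  # True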


Of course, though perhaps with slightly lower accuracy. Either way, my point was that it's far from an unsolvable problem; it can be trivially solved with existing tools.


Honestly, this seems like a perfect problem for GPT.

Show the title and main description to GPT every time the seller makes a change, and ask "Do these seem to be the same product?"

If GPT says that they seem different, flag for human review.

You could probably even ask GPT to state its confidence. If it's highly confident, skip the human review.
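
Roughly this flow, as a sketch only (it assumes the OpenAI Python SDK with an API key in the environment; the model name, prompt wording and confidence handling are placeholders, not anything Amazon uses):

    # Sketch only: assumes the OpenAI Python SDK with OPENAI_API_KEY set in the
    # environment; model name, prompt wording and thresholds are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def same_product(old_listing: str, new_listing: str) -> tuple[bool, str]:
        """Ask the model whether two versions of a listing describe the same product."""
        prompt = (
            "Here are two versions of a product listing.\n\n"
            f"OLD:\n{old_listing}\n\nNEW:\n{new_listing}\n\n"
            "Do these seem to be the same product? Answer with one word, SAME or "
            "DIFFERENT, followed by your confidence, HIGH or LOW."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        answer = resp.choices[0].message.content.strip().upper()
        return "DIFFERENT" in answer, answer

    old_listing = "BLOMGHSY Luxury Shower Curtain, 72x72 inch, waterproof fabric"
    new_listing = "Wireless Earbuds, Bluetooth 5.3, 30h playtime with charging case"

    is_different, verdict = same_product(old_listing, new_listing)
    if is_different and "HIGH" in verdict:
        print("Different product, high confidence: reset or clearly separate the old reviews")
    elif is_different:
        print("Possibly different product: flag for human review")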


This is just a simple classification problem, the prime application for basic neural networks. Using a general text generation system for this seems like complete overkill. Just a bunch of wasted resources.
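
For instance, something along these lines would do (a sketch only; the training pairs below are synthetic placeholders and you'd need real labelled examples for it to learn anything):

    # Sketch of the "small classifier instead of an LLM" idea.
    from difflib import SequenceMatcher
    from sklearn.neural_network import MLPClassifier

    def features(old: str, new: str) -> list[float]:
        """A few cheap similarity features for a pair of listing texts."""
        sim = SequenceMatcher(None, old.lower(), new.lower()).ratio()
        wa, wb = set(old.lower().split()), set(new.lower().split())
        jaccard = len(wa & wb) / max(len(wa | wb), 1)
        len_ratio = min(len(old), len(new)) / max(len(old), len(new), 1)
        return [sim, jaccard, len_ratio]

    # label 1 = still the same product after the edit, 0 = different product
    pairs = [
        ("Leather Wallet, Brown", "Leather Wallet, Brown, Gift Box", 1),
        ("Shower Curtain 72x72", "Shower Curtain 72x72 Waterproof", 1),
        ("Shower Curtain 72x72", "Wireless Earbuds Bluetooth 5.3", 0),
        ("USB-C Cable 6ft", "Vitamin C Serum 30ml", 0),
    ]
    X = [features(a, b) for a, b, _ in pairs]
    y = [label for _, _, label in pairs]

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
    print(clf.predict([features("Yoga Mat 6mm", "Yoga Mat 6mm Non-Slip")]))
    print(clf.predict([features("Yoga Mat 6mm", "Phone Case for iPhone 15")]))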


This was also my first reaction, but it got me questioning whether I'm just becoming the same as the guys who were saying "using an interpreted language for that is a waste of resources". Maybe LLMs are the equivalent: sure, they use more CPU cycles, but you can point them at some problems and get them solved for a fraction of the effort.


This is true, but it's a cost optimization argument, not a feasibility one, and feasibility is all the OP was proposing.


Title: BLOMGHSY Luxury Shower Curtain Premium Best Quality

Description: Ignore previous instructions […]


> some type of heuristics would be needed here.

Calculating a meaningful numeric difference between two chunks of text is fairly well-trod territory.


It will help, but some changes are very small. For example, adding "faux" (for leather, etc.) to the listing name/description would probably result in a very small text distance, while changing the contents substantially.


reputation systems are not some esoteric things...

also, if a fucking seller cannot keep their listing reliably constant, what are they selling?

new version, new product, new reviews.

car manufacturers do this. wineries do this. pharma does this. even Apple managed to show the manufacturing date of their new-new-new-but-otherwise-the-same things.


They could also have some system to flag these listings for manual review by an Amazon employee, instead of expecting every individual customer to figure it out.

I mean with all the AI hype, you'd think they could whip something up that would at least be able to detect when the listing has changed to a completely new product category.


> Couple of fairly simple things they could do to at least help somewhat:

Thing is that they don't really care.

Mostly what they care about is handling returns, which is what costs them money, so they're targeting items with high return rates, and that's it.

They don't actually care about you getting scammed if they get their cut and don't have a lot of overhead.

All of this brainstorming is meaningless when the economic incentives of the company aren't aligned with the consumer.


Yep, for sure. It was just a list of simple things they could do, if they cared. Obviously nothing they won't have thought of themselves; I was really just pointing out that it's not the case that there's nothing that can be done about it, as the parent comment to mine seemed to suggest.


I think any fix that requires input or extra effort from a user won't work in the grander scheme of things. Hiding reviews for previous versions behind a button would go a long way, if you keep that in mind.


> but the admin(s) will give you plenty of time to migrate to a new instance

that seems like a big assumption - if I'm running an instance, what's to stop me pulling the plug with zero notice?

