In trying to prove that consequentialism faces difficulties, my thesis supervisor gave the example of self-driving cars.
Suppose you're riding in a self-driving car. Suddenly you find yourself in an unavoidable collision with a minivan full of passengers. Your vehicle has two options: (1) collide in such a way that you survive but several people in the minivan die, or (2) collide in such a way that you die and everybody else survives. If programmed with consequentialist instincts, your car would intentionally kill you in order to save the majority.
From this, I was supposed to gather that obviously, this would be horrible and that obviously, this suggests a problem with consequentialist ethics.
Whether self-sacrificial cars are good for society, I have no idea. It isn't as obvious to me as it is to my supervisor that it's wrong. But even presuming that it is as horrible as he suggests, this is only a problem for the most naive forms of consequentialism.
There is a market for kosher meat because it's believed by some to be more pious (it isn't), more humane (it isn't), more sanitary (it isn't), and healthier (it isn't) than regular meat. For all of these imaginary benefits, you have to pay more to get it. Some people, mainly the ones unconcerned with the piety factor, don't need to pay extra for the real thing but just want something with that familiar kosher taste. Supermarkets now offer "kosher style" meat. It isn't actually kosher, but it tries to replicate kosher in the same way that a veggie burger tries to replicate beef.
A lot of thought experiments that try to prove consequentialism is awful really only refute "consequentialist style" theories of ethics. My supervisor's error was in confusing consequentialism ("seek the greatest good for the greatest number") with an action that merely seems to seek the greatest good for the greatest number but actually leads to more harm than an alternative would.
Kosher consequentialists are interested in the "actually" part of the equation. Sacrificing your life for five others may seem like a very utilitarian thing to do, but it isn't consequentialist at all if that action leads to horrible outcomes (which my supervisor believes it does).
If a world where self-driving cars make naive "consequentialist style" decisions to kill their owners is worse than a world where cars make other decisions, then consequentialism prefers the alternative. Kosher consequentialism favors whatever works. If you can imagine a nightmare scenario caused by consequentialist actions, then you are very probably imagining actions that are only superficially similar to consequentialism.
You could make a consequentialist justification for the car saving the minivan and you could make a consequentialist justification for the car protecting its owner. Attributing consequentialism to the option that superficially appears to serve the greater good isn't quite kosher.