I owe a lot of the most informative programming work I’ve done to Impact.
Impact was so ahead of its time. Proud to say I was one of the 3000 license owners. One of the best purchases I’ve ever made. The only game I’ve ever really properly finished was made in Impact.
I loved that the source code was part of the license, and even modified the engine and the editor to suit my needs.
I was so inspired that I worked on my own JS game engine (instead of finishing games - ha!) for years after. I never released it, but I learned a ton in the process and made a lot of fun gamejam games with it.
I was also inspired by Impact’s native iOS support (Ejecta), but frustrated that it didn’t run on Android (at the time at least), so I fumbled my way through writing JVM bindings for V8 and implemented a subset of WebGL to run my game engine on Android without web views.[0] I made the repo for V8 bindings public and to my surprise it ended up being used in commercial software.
I won’t bore you with the startup I tried to bootstrap for selling access to private GitHub repos, which was inspired by Impact’s business model…
Anyway, it warms my heart and makes me laugh with glee to see Impact getting an update for the “modern” web with a C port!
I’d say these are strange times for the web, but I can’t remember a time when things were anything but strange. Cheers!
[0]: https://github.com/namuol/jv8
I might be missing something, but isn't the original Impact a JavaScript- and browser-based engine? Shouldn't it run on Android in a simple WebView just fine?
Most of that repo appears to have been built in early 2013. Android WebView didn't switch to chromium until late 2013. Prior to that it was usually an older hacked up WebKit sometimes provided by the phone manufacturer with unreliable features.
I remember working on a QC app that used NFC and a webview right when I think the first Pixel phone came out, basically to track QC steps while expensive equipment went through a production line. Was a somewhat painful, but interesting experience. I'm not sure if I liked it more or less than dealing with Palm/Handspring development a few years before it.
It's funny, I actually forget some of the small, one-off things I've worked on over the years.
Ejecta was a project that implemented WebGL with JavaScript bindings -- basically everything you need for a web game without the DOM. It implemented enough of WebGL to run Impact-based games.
> Many Web games were created with Impact [the game engine from the article] and it even served as the basis for some commercial cross-platform titles like Cross Code, Eliot Quest and my own Nintendo Wii-U game XType Plus.
Cross Code is an excellent game. I knew that it used web tech and I was constantly amazed by how performant it was on the Nintendo Switch hardware. I would guess that this engine deserves some credit there!
To be fair, they modified Impact _a lot_. In some of their development streams[1] you can see a heavily extended Weltmeister (Impact's level editor).
Imho, that's fantastic! I love to see devs being able to adapt the engine for their particular game. Likewise, high_impact shouldn't be seen as a “feature-complete” game engine, but rather as a convenient starting point.
In game development this isn't true, for better or worse. There is a lot of sunk-cost mindset in games: we just go with what we have because we've already invested the time in it, and we'll make it work by any means.
Of course you can. I'm not really sure why this still gets tossed about. You just get a shiny turd with a lot less stink. I've made a career of taking people's turds and turning them into things they can now monetize (think old shitty video tape content prepped to be monetized with streaming).
I like this take, but you're saying something different I think, which is more along the lines of "Customers don't care about how the sausage is made".
You didn't polish a turd, you found something that could be valuable to someone and found a way to make it into a polished product, which is great.
But "You can't polish a turd" implies it's actually a turd and there's nothing valuable to be found or the necessary quality thresholds can't be met.
That's just crazy BS. Starting from open-source code and adding specific features needed for a project is a very common strategy; it doesn't mean at all that the tool wasn't good to begin with.
phoboslab was downplaying their own efforts by saying that the Cross Code team customised the Impact engine a bunch. My point was that no amount of customisation can turn a bad engine into a good one (you can't polish a turd), so phoboslab definitely deserves credit for building the base engine for Cross Code.
> no amount of customisation can turn a bad engine into a good one (you can't polish a turd)
At a risk of being off-topic and not contributing much to this particular conversation (as I doubt it's relevant to the point you're making), I'd like to note that I often actually find it preferable to "polish a turd" than to start from scratch. It's just much easier for my mind to start putting something that already exists into shape than to stop staring at a blank screen, and in turn it can save me a lot of time even if what I end up with is basically a ship of Theseus. Something something ADHD, I guess.
However, I'm perfectly aware this approach makes little sense anywhere you have to fight to justify the need to get rid of stuff that already "works" to your higher-ups ;)
That Switch port took a lot of effort, as someone has already commented; it is absolutely not standard impact.js.
As an anecdote: everyone wanted a Switch version, but considering the technical limitations, the team replied with "Sorry but CrossCode will be coming to Switch when Hedgehags learn to fly." [1] When they finally got to do it [2], it came with an extra quest called "A switch in attitude", featuring, you guessed it, flying hedgehags.
One thing to note is that you don’t have to feel compelled to master every combat mechanic the game throws at you (which is a lot); you can just pick your favorites. A “fox with one trick vs a thousand” and all that.
I, for example, basically ignored the shield for the vast majority of the game, only making some very basic use of it for a few bosses, but perfect counters could very well be your favorite thing.
Huh. Never would have guessed. I am sure there were many “fancy” effects in the game, but during my playthrough, it all felt like it would have been achievable on a SNES.
I am not familiar with either Rust or the Switch, so maybe you can enlighten me: why would a Rust game be challenging on the Switch?
As long as you have a good C ffi (which Rust has), shouldn't it be quite easy?
Or is the Switch more similar to Android/iPhone where you cannot (could not?) run native code directly but have to use a platform specific language like Java or Objective C?
AFAIK some (all?) console vendors require you to use the compiler toolchain from their SDK to compile your code. So unless the console SDK includes a Rust toolchain the only way is to somehow translate the Rust code to C or C++ and compile that with the SDK compiler.
Maybe if the console SDK toolchain is Clang based it would also work to go via LLVM bitcode instead of C/C++ though.
As flohofwoe pointed out, you are limited by the console vendor SDK, the compiler toolchains, and everything required to target their OS, linker, API, ABI,...
"Thoughts on Flash" may just have saved the Web platform at its hour of greatest need, ie. creeping dominance of a single piece of software.
I believe that somewhere in there was frustration with Adobe, who seemed to abandon MacOS platform support in favor of Windows' much larger user base; e.g. Mac versions were always behind Windows versions. Perhaps Jobs may also have felt that there would be no Adobe without Apple as much as the other way around, but that's speculative.
Flash had numerous issues. The processing power available (especially on a PSP) is more than enough to be “good”, the problem with flash _performance_ is the power usage while achieving that perf. Even on laptops flash was a significant battery life drain whenever it was running, having it on all websites would kill battery life while browsing on a phone.
... and the post has me poking around with C again. I was always an ECMAScript guy, with a little Lingo in there from long ago, but I have an itch for all things low-resource and close to the metal (though not assembly-language close), and for golfing my way towards something that encourages me to dig deeper.
> My decision to sell it was met with a lot of backlash but was successful enough to launch me into a self-sustained career.
As someone who is interested in eventually freeing myself from the corporate job and diving head-first into my side projects, I would love to hear more about this aspect.
For some reason the idea of trying to charge folks for the work I would normally do for the fun of it on the side is daunting to me, even though I know it could enable me to focus on doing the stuff I love full-time.
It's perfectly reasonable to get compensated for the (good) work that you do if it's solving problems for someone, which you probably already know.
So I think it's important to figure out why the idea of doing that is daunting to you. Some common "reasons" are:
- People you interact with frequently tell you not to do it.
- Not having all the skills to execute what you want to do well.
- Asking people for help feels embarrassing and/or is annoying to them.
- The fear of having your work judged by others.
- Losing the "safety" that part of, or all of, your current income offers; especially if you have dependents.
For most people (in the context of what you said), those aren't really good reasons—it's just difficult to leave the comfort zone (nothing wrong with that), because that requires at least some degree of readjustment, which seems "risky". With that mindset it's actually very difficult to ever find a good time to do what you _want_ to do, because every opportunity appears to be a risk instead.
On a somewhat related note, I think it's important to just do what you think is fun and show it to the world, but turning that into something that you can make a living with is an entirely different challenge.
Most people don't actually get to do what they love doing for a living; even if you do, having the pressure that is the expectations of paying customers and/or maintaining revenue could take that love away from you. That's absolutely not to say that you shouldn't—it's just something that's good to be aware of before you jump into it.
Had to log in to my rarely-used HN account to mention that I had played Biolab Disaster over and over again years back but lost track of it and forgot the name. Kinda wild to find it again by sheer luck!
> high_impact is not a “library”, but rather a framework. It's an empty scaffold, that you can fill. You write your business logic inside the framework.
I normally phrase this in a much more negative way: a "framework" is simply a "library" that does not play nice with others. It's good to hear a sensible positive phrasing for once.
I studied Software Engineering, but never quite grasped the difference between a library and a framework until I started my first job developing with WebObjects.
It was a joy to use such a rich and well-thought-out framework that really did do 99% of everything you needed. Adding in your own stuff was about the easiest development I've ever done, and it just worked. It was magic, and I miss using it.
My ideal framework is a library or set of cooperating libraries on the inside, with as little framework as possible.
e.g. Qt is a framework. Qt "calls you". But you can run the QPainter code without starting the Qt event loop or thinking too hard about QObjects. You should ideally be able to use the event loop without buying into signals and slots, though it won't be as ergonomic.
It's not always possible, it's not always worth it, but everything else being equal, I'd rather have no framework at all.
(For game engines I understand why a little framework is necessary - When you're talking about compiling to a weirdo platform like phones or consoles, the engine must also be involved in your build process and sometimes even libc shit. So you can't just make a Win32 exe that is a PlayStation game.)
Not to mention the part where adding compressors like this somewhat defeats the purpose of using a simple format like QOI (although at least zstd is faster than gzip, let alone 7zip).
But if we're modifying things like that, then they might as well make use of Nigel Tao's improved QOIR format, and replace the LZ4 compressor it uses with zstd. That's probably faster and likely compresses better than QOI.
So to clarify: my suggested point of comparison was replacing the GP's QOI + 7Zip with QOIR + zstd. QOIR already compresses better than QOI before the LZ4 pass, and zstd compresses faster than 7zip and often better. On top of that you can request zstd via the content-encoding header when streaming data to a browser, so you don't need to increase the JS bundle or whatever if the use case is the web. So that's basically a guaranteed net improvement all around.
Second of all, the speed/compression trade-off with zstd can be tuned a lot. The "half as fast as LZ4" stat is for the fastest setting, but for the proposed comparison point of 7zip a slower setting with better compression ratio is likely perfectly fine.
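To make the tuning point concrete, here is a minimal sketch using libzstd's one-shot API; the function name and the idea of picking a per-asset level are made up for illustration, not taken from QOIR or the GP's setup:

    #include <zstd.h>
    #include <stdlib.h>

    // Compress a buffer at a chosen level (1 = fastest, 22 = smallest/slowest).
    // Returns the compressed size, or 0 on failure; caller frees *out_dst.
    size_t compress_asset(const void *src, size_t src_size, void **out_dst, int level) {
        size_t bound = ZSTD_compressBound(src_size);   // worst-case output size
        void *dst = malloc(bound);
        if (!dst) return 0;
        size_t written = ZSTD_compress(dst, bound, src, src_size, level);
        if (ZSTD_isError(written)) { free(dst); return 0; }
        *out_dst = dst;
        return written;
    }

The low levels are the "fast but weaker" regime; the higher levels trade speed for ratio, which is the regime relevant to the 7zip comparison above.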
I like the part about memory management. Arenas are so simple.
In the (toy) web server I'm writing I initially also started with arenas. However I quickly realized that I don't actually need to grow and shrink memory at all.
Currently I'm simply allocating the memory I need up front and then slicing it up into pieces for every module.
When we're programming, we often pretend we might need arbitrary amounts of memory, but that's not necessarily the case.
Many things actually have very clear limits. For the rest one can often define them. And if you enumerate those, you know how much memory you will need.
It is fun to think about and define those limits up front. It builds a lot of confidence and promotes a healthy frugality.
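As a rough sketch of that "one upfront allocation, sliced into pieces" idea (names and numbers are mine, not from the comment above):

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct {
        uint8_t *base;
        size_t cap;
        size_t used;
    } region_t;

    // One allocation at startup; never grows, never shrinks.
    region_t region_init(size_t cap) {
        region_t r = { .base = malloc(cap), .cap = cap, .used = 0 };
        assert(r.base);
        return r;
    }

    // Hand out aligned slices; blowing the budget is a bug, not a realloc.
    // align must be a power of two.
    void *region_alloc(region_t *r, size_t size, size_t align) {
        size_t offset = (r->used + (align - 1)) & ~(align - 1);
        assert(offset + size <= r->cap && "static memory budget exceeded");
        r->used = offset + size;
        return r->base + offset;
    }

Each module then takes its slice once at startup, and everything afterwards is just offsets into memory whose total size was decided up front.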
Right, I'm always annoyed when people talk about how std::vector has terrible performance (especially for games), which it certainly does if you just start with an empty vector with no memory reserved and append a few thousand items.
But it is very frequently possible to find the actual maximum you will need, allocate that upfront, and everything's great.
Coincidentally that's how I started thinking about limits and allocating everything up front. I wanted to try out arena allocation, with one arena per http request (and one for the entire lifetime).
But then I realized that if I didn't allocate a large enough backing array (capacity), memory would grow by n + (n+1) + (n+2) + ... every time I allocated on the arena during a request.
Now you're right that this is not necessarily a problem. But this is a side project that I just write in order to learn and explore stuff.
So I thought I would need to figure out a static capacity and was thinking about how to go about this. Then I realized I don't need a data structure that grows, but only a slice of a static memory region, which then means I don't need an arena (which can arbitrarily grow) in the first place.
Now I'm exploring all kinds of things that a web server does and where I would normally use a dynamically growable data structure. Like parsing JSON, form parameters, HTTP headers etc. And I can find natural or self imposed limits for all of those things.
For me, the most interesting thing about this isn't even performance. It's the mere fact that you _can_ figure out and design those limits. And the resulting code doesn't become complicated, but looks much simpler than I would have guessed. I really like the exactness of it all and how it changes the way I think about these things.
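For example (purely illustrative numbers, not from the comment above), an HTTP header table with a self-imposed limit needs no growable container at all:

    #include <string.h>

    #define MAX_HEADERS   64
    #define MAX_NAME_LEN  128
    #define MAX_VALUE_LEN 1024

    typedef struct {
        char name[MAX_NAME_LEN];
        char value[MAX_VALUE_LEN];
    } header_t;

    typedef struct {
        header_t items[MAX_HEADERS];
        int count;
    } header_list_t; // size known at compile time; no allocation, no growth

    // Returns 0 when the limit is hit: reject the request instead of growing.
    int header_push(header_list_t *h, const char *name, const char *value) {
        if (h->count >= MAX_HEADERS) return 0;
        header_t *slot = &h->items[h->count++];
        strncpy(slot->name, name, MAX_NAME_LEN - 1);
        slot->name[MAX_NAME_LEN - 1] = '\0';
        strncpy(slot->value, value, MAX_VALUE_LEN - 1);
        slot->value[MAX_VALUE_LEN - 1] = '\0';
        return 1;
    }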
ohhh I love your use of UNION to create a polymorphic-type ENTITY data structure. Nice work and design.
I still love futzing around in C... It was the original language I learned and God did I struggle with it for years. Like the OP mentioned, C is awesome because it's such a concise language but you can go as deep as you like with it.
Thanks for all your efforts and the writeup...the game has a throwback Commander Keen-type vibe to it and I loved that franchise for a minute back in Carmack's pre-3D days.
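For anyone who hasn't seen the pattern, here is a generic sketch of a tagged union used as a "polymorphic" entity; the type and field names are illustrative, not high_impact's actual definitions:

    typedef enum { ENTITY_NONE, ENTITY_PLAYER, ENTITY_CRATE, ENTITY_PROJECTILE } entity_kind_t;

    typedef struct {
        entity_kind_t kind;
        float x, y, vx, vy;        // fields every entity shares
        union {                    // per-type data overlaps in the same bytes
            struct { int health; int ammo; } player;
            struct { int contents; } crate;
            struct { float damage; } projectile;
        };
    } entity_t;

    // All entities live in one flat array: cache friendly, no per-entity
    // heap allocations, no vtables. Dispatch is a switch on the kind tag.
    void entity_update(entity_t *e) {
        switch (e->kind) {
            case ENTITY_PROJECTILE: e->x += e->vx; e->y += e->vy; break;
            default: break;
        }
    }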
Fun idea, open source for maximum learning value, seemingly flawless execution, vanity-free and clear writeup—what a lovely contribution to the world. I feel privileged just seeing things like this.
One of the final nails was the infamous Chrome 45 aka the Chromepocalypse in the video ad world.
Chrome 45, in the name of performance, defaulted to only loading flash from 3rd party domains after a "click to load". This was bad for ads for obvious reasons, but it was much much worse due to an implementation detail.
In order to get the page laid out properly, Chrome loaded the flash component and then suspended it after a single frame. From an ad perspective, that was enough to trigger the impression pixel, signifying that the ad had been shown and the advertiser should be billed. However, the ad was not shown and there was no way for the ad to detect it had been suspended. Just a nightmare. We (I led the effort at BrightRoll) had to shift a ton of ad auction behavior to special case Chrome 45 and it limited what kind of ads we could serve.
That was the inflection point away from flash for ads. While ad formats like VPAID supported JS/HTML5, they didn't start getting popular until after Chrome 45 was released.
I've been out of the space for about a decade, so I can't say how things have continued to evolve, but the only consistent antidote to fraud has been a trusted third party.
I spent a lot of my time back then pulling in metrics from third parties like Nielsen (who do traditional TV viewership numbers) as well as companies who specialized in "viewability". We could include that data in our real-time auction and bidders would adjust what they were willing to pay based on the reputation of the ad opportunity. At least that's the theory. In practice there was so much indirection and reselling that it was hard to say what was true.
As far as the technical answer: it's a game of cat-and-mouse but we found some interesting metrics we could dig in on. One was that fps of a video hidden behind another element on the page would have a drastically different rendered fps than one in the foreground. That would be different between browsers (and browser versions) but the viewability vendors would find all those quirks, measure, and analyze to give us a normalized score. For a hefty fee, of course.
Percent In View = Area of the Intersection of the Ad and the Viewport / Area of the Ad
You would get the area of the ad using getBoundingClientRect, and the area of the viewport using window.innerWidth and window.innerHeight.
It was not possible to do this if the ad was within a hostile iframe (cross origin iframe) so you needed to use a third party source for this information like SafeFrame.
All of this was greatly simplified when Intersection Observer was officially supported by modern browsers.
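The arithmetic itself is tiny; written out as a plain function (illustrative only; in the browser the two rectangles come from getBoundingClientRect and window.innerWidth/innerHeight, as described above):

    typedef struct { double left, top, right, bottom; } rect_t;

    static double rect_area(rect_t r) {
        double w = r.right - r.left, h = r.bottom - r.top;
        return (w > 0 && h > 0) ? w * h : 0.0;
    }

    // Percent In View = area(intersection of ad and viewport) / area(ad)
    double percent_in_view(rect_t ad, rect_t viewport) {
        rect_t i = {
            .left   = ad.left   > viewport.left   ? ad.left   : viewport.left,
            .top    = ad.top    > viewport.top    ? ad.top    : viewport.top,
            .right  = ad.right  < viewport.right  ? ad.right  : viewport.right,
            .bottom = ad.bottom < viewport.bottom ? ad.bottom : viewport.bottom,
        };
        double a = rect_area(ad);
        return a > 0 ? rect_area(i) / a : 0.0;
    }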
> I imagine it has to be somewhat "bulletproof" since money is involved.
The GP comment pretty much shows how bulletproof it wasn't, it isn't, it never will be. You also have bad actors in the space that will load ads under other ads so that the same page load continues to generate impressions. The space behaves as if there is no incentive to be honest.
Flash game sites were still huge and popular after Adobe’s purchase, so clearly it did not kill it. If you mean something like that the acquisition started the death… that’s a hard position to argue against, since it’s subjective. Perhaps you’re right.
When flash started to fade for games, many developers moved over to adtech where their skills were still valued. Flash continued for a few years to power ads as well as shims around other videos that would collect data and even trigger secondary ad auctions.
Adobe had competing products mostly based on open standards. They shut down many active product lines and merged what was left into Adobe AIR, which didn't take off.
The original JavaScript engine “Impact” from 2010 is at the end of its life; the C rewrite “high_impact” is new and will (potentially) be around for as long as we have C compilers and some graphics API.
The JavaScript engine had a lot of workarounds for things that are no longer necessary, and some things that just don't work that well with modern browsers. Off the top of my head:
- nearest neighbor scaling for pixel graphics wasn't possible, so images are scaled at load time pixel by pixel[1] (see the sketch after this list). Resizing the canvas after the initial load wasn't possible with this. Reading pixels from an image was a total shit show too, when Apple decided to internally double the Canvas2D resolution for their “retina” devices, yet still reported the un-doubled resolution[2].
- vendor prefixes EVERYWHERE. Remember those? Fun times. Impact had its own mechanism to automatically resolve the canonical name[3]
- JS had no classes, so classes are implemented using some trickery[4]
- JS had no modules, so modules are implemented using some trickery[5]
- WebAudio wasn't a thing, so Impact used <Audio> which was never meant for low latency playback or multiple channels[6] and generally was extremely buggy[7]. WebAudio was supported in later Impact versions, but it's hacked in there. WebAudioContext unlocking however is not implemented correctly, because back then most browsers didn't need unlocking and there was no "official" mechanism for it (the canonical way now is ctx.resume() in a click handler). Also, browser vendors couldn't get their shit together so Impact needed to handle loading sounds in different formats. Oh wait, Apple _still_ does not fully support Vorbis or Opus 14 years later.
- WebGL wasn't a thing, so Impact used the Canvas2d API for rendering, which is _still_ magnitudes slower than WebGL.
- Touch input wasn't standardized and mobile support in general was an afterthought.
- I made some (in hindsight) weird choices like extending Number, Array and Object. Fun fact: Function.bind or Array.indexOf wasn't supported by all browsers, so Impact has polyfills for these.
- Weltmeister (the editor) is a big piece of spaghetti, because I didn't know what I was doing.
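As referenced in the first list item, the load-time "pixel by pixel" approach is just nearest-neighbor scaling; a generic C sketch of the idea (Impact itself did this on Canvas2D ImageData in JavaScript):

    #include <stdint.h>

    // Upscale an RGBA image by an integer factor, nearest neighbor:
    // every destination pixel simply copies its nearest source pixel.
    void scale_nearest(const uint32_t *src, int src_w, int src_h,
                       uint32_t *dst, int scale) {
        int dst_w = src_w * scale;
        int dst_h = src_h * scale;
        for (int y = 0; y < dst_h; y++) {
            for (int x = 0; x < dst_w; x++) {
                dst[y * dst_w + x] = src[(y / scale) * src_w + (x / scale)];
            }
        }
    }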
Of course all of these shortcomings are fixable. I actually have the source for “Impact2” doing all that with a completely new editor and bells and whistles. It was very close to release but I just couldn't push it over the finish line. I felt bad about this for a long time. I guess high_impact is my attempt for redemption :]
I loved Impact and paid for it back in the day, though I never ended up finishing the project I was working on. Did you ever throw the source for "Impact2" up anywhere? What's missing from it being releaseable?
I did 5 game jams this year, 4 of them in various WebAssembly languages (C++, Zig, Odin, Rust).
In the end I switched back to JS/TS, because I found more benefit in minimizing layers of abstraction and translation (WASM is set up so you are forced to interface with JS to actually do anything) than in any particular language or library.
(An exception might be something like Unity, due to the excellent featureset, but the IDE has become so slow it's unbearable...)
I used to play X-Type all the time on iOS, it’s how I discovered Impact. I love web-based games but lately I’ve been tempted to write in C or C++. Did you notice dramatic gains in optimization porting Impact from JavaScript to C?
I remember working on an impact.js game 10 years ago. I just can't seem to find the source code! It was the first time I hired a guy to create some graphics. Very inspiring gaming platform.
It does, but the main speedup comes from using WebGL instead of Canvas2D. Sadly, Canvas2D is still as slow as it ever was and I really wonder why.
Years back I wrote a standalone Canvas2D implementation[1] that outperforms browsers by a lot. Sure, it's missing some features (e.g. text shadows), but I can't think of any reason for browser implementations needing to be _that_ slow.
For things missing/hard in WebGL, is it performant enough to rely on the browser compositor to add a layer of text, or a layer of svg, over the WebGL?
I have some zig/wasm stuff working on canvas2D, but rendering things to bitmaps and adding them to canvas2d seems awfully slow, especially for animated svg.
Ah man, I'm still looking for a general canvas drop in replacement that would render using webgl or webgpu if supported. Closest I've found so far is pixi.js, but the rendering api is vastly different and documentation spotty, so it would take some doing to port things over. Plus no antialiasing it seems.
You don't need multithreading to get concurrent asset streaming, a completion callback or async-await-style code will work too (after all, that's how most Javascript web games load their assets "in the background"). Also, browsers typically restrict concurrent download streams to about 6 (the exact number is entirely up to the browser though) - so you can have at most 6 asset files 'in flight'. In the end you are still limited by the user's internet bandwidth of course.
None of that worked out of the box, and we also spent most of the loading time CPU bound, processing the individual assets after they arrived over the wire. That was a blocking, non-async operation.
Then the next question is why your asset formats require such heavy processing after loading. Normally you'd convert any data into custom engine formats in an offline step in the asset pipeline so that the files can be dumped directly into memory ready for the engine (and 3D API) to be used without an expensive deserialization process.
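A rough sketch of that "dump it straight into memory" idea (the format and names are made up, not any specific engine's):

    #include <stdint.h>

    // The offline pipeline writes this exact layout to disk, so loading is
    // one read; no per-field parsing or deserialization at runtime.
    typedef struct {
        uint32_t magic;        // e.g. 'S','P','R','T'
        uint32_t width, height;
        uint32_t frame_count;
        // immediately followed by width*height*4 bytes of pixels per frame
    } sprite_blob_t;

    // const sprite_blob_t *sprite = (const sprite_blob_t *)file_data;
    // const uint8_t *pixels = (const uint8_t *)(sprite + 1);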
FWIW, POSIX-style multithreading in WASM has been supported everywhere again for a while now (it was disabled after Spectre/Meltdown), but it is locked behind special COOP/COEP HTTP headers for security reasons (so you either need to be able to configure the web server to return those headers, or use a service worker to inject the headers on the client side).
Honestly I would never ever execute any code from this guy. He is the inventor/founder behind the coinhive crypto mining network. [1]
This guy made billions illegally [2], and maintained the biggest ransomware crypto coin network for years, by offering the tools and SDKs to fund dozens of cyber war involved agencies across the planet. [3]
I have no idea how he got away with it, because his name keeps appearing in lots of crypto trading companies and trade registries. (Not gonna post them, but you can google his name to find this evidence)
He even organized a doxxing campaign against brian krebs at the time called "krebsistscheisse" via his pr0gramm platform [4] [5] [6], to somehow defend the idea that abusing user's computers for personal enrichment is a legit way of making money if you donate some low percentage to cancer research with it?!?
Sorry, but I would never trust this guy's code again. You should be careful, and audit any code he writes before you execute it.
(There are lots of other articles about it, and about Dominic Szablewski being the guy behind coinhive and the original owner of pr0gramm, while still doing development work for the company that officially owns the imageboard nowadays.)
I don't know what your personal agenda is, but there's so much misinformation and hyperbole in your comment that I have to assume that this is personal for some reason!?
I've been meaning to write a proper post-mortem about all that, now that the dust has settled. But in the meantime, just quickly:
- I did not make billions. You're off by quite a few orders of magnitude. After taxes it was well below $500k.
- Nothing I did was illegal; that's how I got away with it.
- Coinhive was not ransomware. It did not encode/hide/steal data. In fact, it did not collect any data. Coinhive was a JavaScript library that you could put on your website to mine Monero.
- I did not operate it for "years". I was responsible for Coinhive for a total of 6 months.
- I did not organize a doxing campaign. There was no doxing of Brian Krebs. I had nothing to do with the response on the image board. They were angry, because Brian Krebs doxed all the wrong people and their response was kindness: donating to cancer research. In German Krebs = cancer, hence the slogan “Krebs ist scheiße” - “cancer is shit”.
- Troy Hunt did not "snatch away" the coinhive domain. I offered it to him.
In conclusion: I was naive. I had the best intentions with Coinhive. I saw it as a privacy preserving alternative for ads.
People in the beta phase (on that image board) loved the idea to leave their browser window open for a few hours to gain access to premium features that you would have to buy otherwise. The miner was implemented on a separate page that clearly explained what's happening. The Coinhive API was expressly written with that purpose: attributing mined hashes to user IDs on your site. HN was very positive about it, too[1]
The whole thing fell apart when website owners put the miner on their page without telling users. And further, when the script kiddies installed it on websites that they did not own. I utterly failed to prevent embedding on hacked websites and educating legitimate website owners on “the right way” to use it.
I only have access to the trade volume of coinhive's wallet addresses that were publicly known at the time and what the blockchain provides as information about that. How much money RF or SK or MM made compared to you is debatable. But as you were a shareholder of the company/companies behind it, it's reasonable to assume you've got at least a fair share of their revenue.
If you want me to pull out a copy of the financial statements, I can do so. But it's against HN's guidelines so I'm asking for your permission first to disprove your statement.
> Nothing I did was illegal (...) Coinhive was not ransomware
At the time, it quickly became the 6th most common miner on the planet, primarily (> 99% of the transaction volume) being used in malware.
It was well known before you created coinhive, and it was known during and after. Malpedia entries should get you started [1] [2] but I've added lots of news sources, including German media from that time frame, just for the sake of argument [3] [4] [5] [6] [7] [8]
----------
I've posted Troy Hunt's analysis because it demonstrates how easily this could've been prevented. A simple correlation between Referer/Domain headers or URLs and the tokens would've been enough to figure out that a threat actor from China that distributes malware very likely does not own an .edu or .gov website in the US, nor SCADA systems.
As there was a financial benefit on your side, no damage payments to any of the affected parties, and no revoked transactions from malicious actors, I'd be right to assume an unethical motivation behind it.
> I did not organize a doxing campaign. There was no doxing of Brian Krebs.
As I know that you're still an admin on pr0gramm as the cha0s user, that's pretty much a useless archive link.
Nevertheless I don't think that you can say "There was no doxing of Brian Krebs" when you can search for "brian krebs hurensohn" on pr0gramm, still, today, with posts that have not been deleted, and still have his face with a big fat "Hurensohn" stamp on it. [9]
As I wrote in another comment, I also said that there are also nice admins on the imageboard like Gamb, and that they successfully turned around that doxxing attempt into something meaningful.
> I don't know what your personal agenda is, but there's so much misinformation and hyperbole in your comment that I have to assume that this is personal for some reason!?
This is not personal for me, at all. But I've observed what was going on and I could not be silent about the unethical things that you built in the past.
To me, that destroyed all trust and good faith in you. The damage that you caused on a global scale with your product coinhive far exceeds whatever one person's lifetime can make up for. And I think that people should know about that before they execute your code and end up the victim of a fraudulent coin-mining scheme.
Calling this hyperbole and misinformation is kind of ridiculous, given that antivirus signatures and everything are easily discoverable with the term "coinhive". It's not like it's a secret or made up or something.
Your "portfolio page" is quite disrespectful and in line with your behaviour in this HN submission. You've made up too many blatantly obvious lies and are now stooping down to provocating a reaction, because you having nothing better to say. I don't think anyone should trust you.
> Your "portfolio page" is quite disrespectful and in line with your behaviour in this HN submission.
Care to elaborate what is "disrespectful" about my own personal website? How did I offend you, specifically?
> You've made up too many blatantly obvious lies and are now stooping to provoking a reaction, because you have nothing better to say. I don't think anyone should trust you.
I've cited a lot of news articles, blog posts, insights, even malware databases from multiple globally known and trusted security vendors.
I think Coinhive was really cool and a fantastic idea that was ruined by rogue actors. I love the thought of mining for 20 seconds to unlock reading an article instead of getting out your credit card or even paying with crypto. Completely anonymous payment with zero overhead.
I've added some more links/news because I was on mobile before.
If you google "doxxing brian krebs pr0gramm" you will find lots of other news sources, same as for "coinhive trade volume", as it was the platform that made monero/XMR the biggest cryptojacking platform.
> If you google "doxxing brian krebs pr0gramm" you will find lots of other news sources
I found no doxxing, I read that they organized a fundraiser, donated to cancer research, and even Brian Krebs wrote that “the response from pr0gramm members has been remarkably positive overall.”[1]
If you think that images of brian krebs' face with "Hurensohn" (German for "son of a whore") on it are not doxxing, you must be living in a parallel world.
Not gonna post direct links to this, because of HN guidelines. See the VICE article about it, which still contains some of those images. [1]
What I'm saying is that there was an attempt by the users of the imageboard to doxx Brian Krebs [2], and Gamb, one of the admins, turned that shitstorm into a positive thing. [3]
Ok, so he basically made a hidden JavaScript-based miner and tools to distribute it.
Couldn't find anything to support the claim that he would have tried to dox Krebs. Also "maintaining biggest ransomware crypto coin network" feels like a dishonest phrasing trying to make it sound like he had something to do with ransomware. Monero was practically never used for ransomware payments back when coinhive was active, and even today Bitcoin is the most used method for ransom payments by far. Monero was simply the most profitable coin to mine with CPU.
That being said I agree that I wouldn't trust any software made by this guy. Even the hidden miner was obviously highly unethical and probably illegal.
> Monero was practically never used for ransomware payments back when coinhive was active
This is a somewhat wrong assumption. While I agree that ransomware payments themselves weren't done via Monero or coinhive, malware on the other hand (read as: installed viruses/trojans/programs that the owner of the machine didn't consent to) was using it primarily to mine crypto coins.
I'm actually quite curious how it would perform relative to the C version. The article shows 1000x particles, but LittleJS has demos with a couple orders of magnitude more than that at 60fps.
I haven't looked into the code, but the correct way would be to move the particle engine into shader code, and then the limit would be as much as the graphics card can take.
It appears that after all these years, not everyone has bought into the shader programming mentality, which is quite understandable as only proprietary APIs have good debugging tools for them.
JS engines like V8 are very good at JIT and optimization based on actual profiling. If we talk about pure CPU modeling, I suspect a good JIT will soon enough produce machine code on par with best AOT compilers. (BTW the same should apply to JVM and CLR languages, and maybe even to LuaJIT to some extent.)
Exactly. Detecting patterns that are typical for human coders and replacing them with stuff that uses the machine efficiently is what most compilers do, even for low-level languages like C. You write a typical `for` loop, the compiler recognizes it and unrolls it, and / or replaces it with a SIMD-based version with many iterations run per clock.
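A generic illustration (not taken from the comment above): at higher optimization levels, compilers will typically unroll and auto-vectorize a plain loop like this into SIMD code that processes several elements per iteration:

    // Scales every element of an array; GCC and Clang typically emit SIMD
    // instructions for this loop at -O2/-O3.
    void scale_floats(float *a, float s, int n) {
        for (int i = 0; i < n; i++) {
            a[i] *= s;
        }
    }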
Yes, but the point of the joke was to make the loop longer, while keeping it somehow logical. I wish I managed to insert Purescript, Elixir, Pony and ATS somehow.
...in this specific project, the Emscripten SDK is used for the link step (while compilation to WASM is handled by the Zig compiler, both for the Zig and C sources).
The Emscripten linker enables the 'embedded Javascript' EM_JS magic used by the C headers, and it also does additional WASM optimizations via Binaryen, and creating the .html and .js shim file needed for running WASM in browsers.
It's also possible to create WASM apps running in browsers using only the Zig toolchain, but this requires solving those same problems in a different way.
Yep, exactly why I don't use C anymore. The package management story is so bad/nonexistent that the typical approach is to just vendor everything. No thanks.