I've been working on a unique project that merges the domains of generative art, pixel art, and LEGO construction, and I'm thrilled to finally share it with you. The project involves using the PICO-8 engine to generate random, abstract designs, which are then transformed into tangible LEGO constructions.

Each LEGO art piece is a translation of a digital pattern into a physical form, showcasing the fascinating possibilities in this intersection of generative and LEGO art.
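To give a rough sense of the approach (the real generator runs in PICO-8 and is described in the post), here is a tiny JavaScript sketch of generating a random pixel grid from a fixed palette; the palette, grid size, and function names here are illustrative assumptions, not the actual algorithm:

```
// Illustrative sketch: pick a random palette color for each cell of a grid,
// one cell per LEGO stud. The real designs use PICO-8 and are more structured.
const PALETTE = ['#1D2B53', '#FF004D', '#29ADFF', '#FFEC27']; // assumed colors
const SIZE = 16; // assumed grid size

function generateDesign(size, palette) {
  const grid = [];
  for (let y = 0; y < size; y++) {
    const row = [];
    for (let x = 0; x < size; x++) {
      row.push(palette[Math.floor(Math.random() * palette.length)]);
    }
    grid.push(row);
  }
  return grid;
}

const design = generateDesign(SIZE, PALETTE); // each cell maps to one 1x1 plate
```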

You can delve into the full details of the project, including the algorithm used for generating the designs and the process of building the LEGO art, in my blog post: [Link to the article]

Any feedback or suggestions for future projects are greatly appreciated!


I personally found utilitarianism and Kantianism to be very helpful schools of philosophy for thinking about the implications of AI. In short, utilitarianism promotes decisions that benefit the greatest number of people, whereas Kantianism focuses on the idea that people should always be treated with dignity and respect. Michael Sandel's book "Justice" is a great introduction to the topic. [1]

[1] https://www.amazon.com/Justice-Whats-Right-Thing-Do/dp/03745...


It seems that the Alpinist with the green dial will be back in production in early 2020 with a new movement. [1]

[1] https://www.seiyajapan.com/blogs/news/the-alpinist-will-make...


Yeah, but that won't be the same Alpinist. Especially given the Prospex logo, which is a bit weird because my impression of the Prospex line is that it is mostly a diver's line, whereas the Alpinist is supposed to be a climber's/mountaineer's watch.

After the 4S15 was discontinued, Alpinists based on the 6R15 were produced for many years, but they are nowhere near as sought after as the 4S15 version.


Sadly it's the cream dial Sports 200.


It's not really a new movement, just an improved version of the same old...

...but for a much higher price.

I really don't appreciate how Seiko is upselling certain segments of their products. I feel customers are being milked without getting the once-excellent value in return.


Is it a single model or stacking of multiple models?


There were several top contenders that used ensembles. We went with a single model (also in the top few) for reasons of integration cost. I don't actually remember which placed first in the competition.


Manufacturing output in the US has more than doubled in the last 3 decades, but the number of jobs in the sector has shrunk from 17.5M to 12.4M [1]

The main reason for this is automation.

I think the right question to ask is: how do we make the productivity gains from automation benefit workers more?

[1] https://www.pewresearch.org/fact-tank/2017/07/25/most-americ...


Are these the funny numbers that use Intel's massive profits multiplied by a Moore's law value-adjustment factor to hide the decline of everything else?

https://qz.com/1269172/the-epic-mistake-about-manufacturing-...


Also make sure to check out the Reina Sofia museum. It's within walking distance of the Prado, and it has masterpieces from Picasso and Dali.


I think that this is a question of basic computer literacy, not expertise in technology.


Right? If anything, someone in this position needs basic familiarity to avoid social engineering. It reminds me of a scene in Hackers:

Security guard answers phone: Security, uh Norm, Norm speaking.

Dade: Norman? This is Mr. Eddie Vedder, from accounting. I just had a power surge here at home that wiped out a file I was working on. Listen, I'm in big trouble, do you know anything about computers?

Norm: Uhhmmm... uh gee, uh...

Dade: Right, well my BLT drive on my computer just went AWOL, and I've got this big project due tomorrow for Mr. Kawasaki, and if I don't get it in, he's gonna ask me to commit Hari Kari...

Dade proceeds to get Norm to read him the number off of a modem at the TV station


[1] provides a good answer to this question.

"The outcome becomes more predictable over time.

This is because the payoff depends on the accurate prediction of an outcome of an event. Therefore, people will put in more effort to come to the most accurate conclusion.

As a larger number of people do more market research to come to the most likely conclusion, the predicted outcome will lean more favorable to one side.

If you place a bet on a coin flip, the outcome will always be 50% heads, 50% tails. There are no external market conditions that will influence the outcome. Luck plays a major role, and this is called gambling.

But prediction markets rely on the collective wisdom held by a group of people on the probability of a future event materializing."

[1] https://cointelegraph.com/explained/prediction-markets-expla...
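To make the mechanics concrete, here is a small illustrative calculation (hypothetical numbers, not from the article): a YES share in a typical prediction market pays out 1 if the event happens and 0 otherwise, so the market price can be read as an implied probability, and anyone whose research gives a different probability has an incentive to trade until the price moves:

```
// Hypothetical example: a YES contract trades at 0.60 and pays 1.00 if the
// event occurs. If your research says the true probability is 0.70, buying
// has positive expected value -- and your buying pushes the price toward 0.70.
function expectedValue(beliefProb, price) {
  return beliefProb * 1.0 - price; // payout of 1 on success, minus price paid
}

console.log(expectedValue(0.70, 0.60)); // +0.10 per contract: buy
console.log(expectedValue(0.50, 0.60)); // -0.10 per contract: sell instead
```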


Can you show that the predictions are true, rather than just perturbations? To me it looks like trading drives markets to respond (beyond large, obvious changes in fundamentals), rather than providing usable predictions.

Moreover, assuming they are predictive, do they target funds to where society needs them?

Finally, if people are good at prediction, using algorithms for example, then those algorithms provide the required predictions, and we should use them to target resources and retain wealth in the market rather than giving out 40% of the wealth (UK) just to get some predictive power.

I think the value added by the market is vastly overstated in the present model.


>>To me it looks like trading drives markets to respond

Respond to what? The markets are about events that have not happened. The payout is directly proportional to the predictive power of the market purchase.

>>Finally, if people are good at prediction, using algorithms for example, then those algorithms provide the required predictions, and we should use them to target resources and retain wealth in the market rather than giving out 40% of the wealth (UK)

How do we incentivize people to generate these predictive methods, let alone release them for public use, without a compensatory scheme like a prediction market?


Is there a plan to create a version that generates the graphs and manages all the filtering on the server side, instead of having all the data in the browser?

This would be very helpful for cases that use large datasets...

I built a visualization using dc.js, and working with large datasets was the biggest pain point for me.

http://adilmoujahid.com/posts/2016/08/interactive-data-visua...


We had plans to move DataModel (which manages all data ops) to the server side. We even have a half-baked DataModel in Scala, which we thought we would finish once we understood some use cases. But currently we have put it on hold.

We would love to know your use case, the number of data points, and the ops you'd run on the data server-side.

You can mail us at eng@charts.com


One use case could be a data visualization similar to what I built in [1].

To build the visualization in [1], I used 3 datasets in CSV format from a Kaggle competition [2], and I implemented the charts using dc.js and Leaflet.js. The charts were interactive, and I managed to filter the data even in the map.

The largest dataset was 284 MB, which was still OK and didn't crash my browser.

There were 2 drawbacks to my approach: 1- All the data was in the browser; if my data were bigger (~1GB), it would crash the browser. 2- If I deployed the visualization to a server (for example AWS), rendering would be extremely slow, as all the data has to be downloaded to the browser first...

[1] http://adilmoujahid.com/posts/2016/08/interactive-data-visua...

[2] https://www.kaggle.com/c/talkingdata-mobile-user-demographic...
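For reference, the core of such a dc.js setup is only a few lines on top of crossfilter; here is a simplified sketch (the CSV file, field names, and chart choice are illustrative assumptions, not the actual code from the post):

```
// Minimal dc.js + crossfilter sketch: load a CSV, build a dimension/group,
// and wire an interactive bar chart. All the data lives in the browser.
d3.csv('events.csv').then(data => {
  const ndx = crossfilter(data);
  const hourDim = ndx.dimension(d => +d.hour);
  const countGroup = hourDim.group().reduceCount();

  dc.barChart('#hour-chart')
    .dimension(hourDim)
    .group(countGroup)
    .x(d3.scaleLinear().domain([0, 24]))
    .elasticY(true); // rescale the y-axis as cross-filters change the counts

  dc.renderAll();
});
```

Every chart registered this way re-renders when any other chart filters the shared crossfilter instance, which is exactly the work that becomes expensive as the data grows.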


Just had to say thanks [1], it was one of my first reads when learning dc.js and very helpful!


Makes sense. Will keep you posted on the plans.


It seems to me like you could leverage any number of analytical engines that expose relational interfaces, rather than go to the trouble of building your own relational model. What are the goals in building first, rather than integrating?


I'm glad you asked this.

So here is the thing with our DataModel: every time you perform an op on a DataModel, it creates another instance. Performing multiple such operations creates a DAG where each node is an instance of DataModel and each edge is an operation.

We have auto interactivity, which propagates data (dimension) pulses along the network. Any node which has a visualization attached receives those pulses and changes the visual.

So far I have not found any relational interface which exposes this DAG and an API for it to the user. Hence we thought of building it ourselves.

Having said that, we might use some established relational interface and do the propagation ourselves.
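To make that concrete, here is a rough sketch of the idea (class and method names are made up for illustration; this is not the actual DataModel API):

```
// Sketch only: every op returns a *new* instance, and the parent records the
// edge, so repeated ops build a DAG of DataModel nodes. Propagation pulses
// walk the DAG; any node with a visualization attached redraws itself.
class MiniDataModel {
  constructor(rows) {
    this.rows = rows;
    this.children = []; // DAG edges: { op, child }
    this.renderers = []; // visualizations attached to this node
  }

  select(predicate) {
    const child = new MiniDataModel(this.rows.filter(predicate));
    this.children.push({ op: 'select', child });
    return child; // immutable-style: the original instance is untouched
  }

  attach(render) {
    this.renderers.push(render);
  }

  propagate(pulse) {
    for (const render of this.renderers) render(pulse, this.rows);
    for (const { child } of this.children) child.propagate(pulse);
  }
}

// Usage: a filter creates a child node; a pulse from any interaction
// reaches every attached visualization downstream.
const root = new MiniDataModel([{ city: 'Tokyo', sales: 10 }, { city: 'Osaka', sales: 7 }]);
const tokyo = root.select(r => r.city === 'Tokyo');
tokyo.attach((pulse, rows) => console.log('redraw with', rows, 'after pulse', pulse));
root.propagate({ dimension: 'city', value: 'Tokyo' });
```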


The implementation you are discussing sounds pretty elegant. I am most familiar with Power BI from a data viz perspective, but have used most of the enterprise viz tools out there.

The thing that always struck me about Power BI (and also Qlik) is that it is very much a model-first tool. Visualization is secondary to the model, to the extent that much of the friction I see in new users has been treating it as a reporting/layout/visualization tool when, in fact, it is a data modeling tool with a visualization engine strapped on.

One of the big drawbacks with Power BI is that it has a terribly inefficient implementation for propagating filter contexts for visual interactions (this is their translation of your "auto interactivity, which propagates data (dimensions) pulse along the network"). I do not know the internal implementation, but I am relatively certain that visual interactions are ~O(N!) in the number of visual elements on a report page, based on my experience of performance scaling across a wide range of reports. Regardless, one of the best practices is to limit a Power BI report page to a small number of visualizations (recommendations of the cutoff value vary, and types of visuals can also impact this).

If I understand you correctly, you are calculating the minimum set of recalculations/re-renderings necessary, based on the data element that a user has interacted with. This should be something much closer to O(N) in the number of visuals to propagate user selections to other visuals. I am making an assumption that most visuals should interact, as typically the scope of a single report should have a high degree of intersection of dimensionality across all report elements.

I do not know of any analytics engine that exposes the sort of DAG and associated API you are discussing, either. The reason for my initial question was simply because that sort of engine is a product in and of itself. There are plenty of columnstore databases (and following other paradigms, but optimized for OLAP workloads) out there. It seems like biting off a lot to tackle both the data engine and the visualization tier at the same time.

The big reason that I ask is that this sort of approach to visualization seems to me to benefit greatly from a data model that supports transaction-level detail. The type of interactivity that you expose is extremely powerful. I have seen interactive tools hamstrung by data models that do not allow sufficient interaction. As soon as you put interactivity in front of users, in my experience, they want to do more with the data. If you are limited to datasets that can live comfortably in the browser, that seems a showstopper to me, as it will require pre-aggregation to fit most of the datasets I've seen; pre-aggregation negates many benefits of interactive data exploration.

I'll be taking a much further dive into your product either this weekend or next. I'm very interested.


You are absolutely correct that the propagation for us is O(n), as the graph is directed. But the problem there is multifold. Once a node receives a propagation pulse, it tries to figure out the affected subset using the dimensions received in the pulse. This requires joining, hence a chance of building an O(mn) cartesian product. If you look at the https://www.charts.com/muze/examples/view/crossfiltering-wit... example, drawing the contribution bars when the first chart is dragged requires a join followed by a groupBy.

This is why performing all of this in a browser environment, even for a low amount of data (say 10k rows), is a nightmare. There are ways you can address it, but in the browser you hit the limit pretty soon.
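As a rough illustration of that cost (hypothetical data shapes, not the actual implementation): a naive join of the propagated dimension values against the rows is O(mn), while indexing one side first brings it down to roughly O(m+n) before the groupBy runs:

```
// Naive join: for each of the n rows, scan all m selected values -> O(mn).
function naiveJoin(rows, selected) {
  return rows.filter(r => selected.some(s => s.day === r.day));
}

// Hash join: index the m selected values once -> roughly O(m + n).
function hashJoin(rows, selected) {
  const keys = new Set(selected.map(s => s.day));
  return rows.filter(r => keys.has(r.day));
}

// groupBy after the join, e.g. summing a measure per category.
function groupBy(rows, key, measure) {
  const out = new Map();
  for (const r of rows) {
    out.set(r[key], (out.get(r[key]) || 0) + r[measure]);
  }
  return out;
}
```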

We wanted the concept to be validated first, hence we have built it for the browser only. But we would love to hear / learn / discuss this with you before we go ahead and build the data model on the server.

Another ambiguity with interaction is its visual effect. Questions like: do you really want all your charts to be cross-connected? An in-house survey showed us there is no certain answer, and what kind of visual effect should happen on interaction differs from person to person and is a function of the use case. This is why we have chosen to make the behaviour configurable, like:

```
muze.ActionModel.for(...canvases) /* for all the charts on the page */
  .enableCrossInteractivity() /* allow default cross interactivity */
  .for(tweetsByDay, tweetsByDate) /* but for the first two canvases in the example */
  .registerPropagationBehaviourMap({
    select: 'filter', /* if selection using mouse click happens, filter data */
    brush: 'filter' /* if brushing happens, filter data */
  });
```

We are still writing docs for this. We hope to finish all these docs in two weeks' time.


I'm happy to continue this discussion in further detail and share my experience. You can get in touch with me at the email address in my profile if you'd like.

You're hitting a very important question with the ambiguity of the desired effect from interaction. I often catch myself thinking I've heard every use case and built most of them in various viz tools. But I have learned that I am always wrong when I think that. I frequently encounter people asking for new things, and it is always a toss-up whether what they want is trivial and novel or impossible and obvious.

I tend to be a data-guy much more than a viz-guy, but I fully understand the value of viz for actually presenting knowledge. Like I said, I'm interested in trying out your tool more.


Out of interest, what size of dataset are you talking about? Thousands of records? Millions?


Customers I've worked with that have small datasets would typically range into the 10M-row order of magnitude for a primary fact table, though we had smaller outliers. Additionally, it would be common to have wide dimensions that could be KBs/record, which can add up quickly.


Might I suggest giving Perspective.js a look? It supports many of the same visualizations as Muze (and some Muze doesn't, specifically datagrids), is user-configurable, is written in WebAssembly (C++) for extreme performance, and can run trivially on the server via node.js - there is even a CLI version:

https://github.com/jpmorganchase/perspective

https://github.com/jpmorganchase/perspective/tree/master/exa...


Just had a look, looks amazing.

WebAssembly is on our radar and is coming soon. But we just wanted to release a super early version of what we've built so far.


I just want to chime in that this is the biggest limitation of dc.js. For these solutions to scale, there needs to be a server-side data-processing option.


Hey, we started porting DataModel (where all the data ops happen) to Scala but then put it on hold.

Will figure out the effort and roadmap and then keep you updated on the plan.


How can you explain that most Europeans use US services: Amazon, eBay, Facebook, Google, Dropbox, Airbnb, Uber?


These are not just US services; they came out of one specific location, where the capital of talent and finance met. There are reasons why these companies didn't emerge in Idaho or Arkansas. Europe had its own issues in the era when the current VCs emerged: the Soviets were still there, Britain had far more people beneath the poverty line, Norway hadn't found its oil yet, etc. But we are catching up.


Every single company you named targets European markets, thus Europeans use them.


I got that, but the previous comment argues that the heterogeneity of the European market, rather than European lawmakers, is the main reason for the difficulty of launching successful companies. That argument doesn't explain the success of American companies in Europe.


I suppose the argument would be that those companies launched in the USA first, were successful there, and then had enough capital and resources to launch in the tougher more fragmented European markets.


That argument is probably correct.

Here in Israel, which is often considered one of the closest "startup capitals" to Silicon Valley, what almost any new company will do is just launch in the US, since it's an easy market. Most wouldn't launch in Israel, since it's such a small market (8 million people).

A few will launch in Europe, but because of all the issues mentioned above (different laws, languages, cultures, etc), the US is a much more attractive first target.

Once they get big, they might then move to Europe.


This is a very good strategy. I guess it must be hard to execute, but once you perfect it, the small-market problems are basically gone. Any idea what's the general way startups go about doing that? Also, how do you immigrate to Israel (:D)?


It's usual to start a company here in Israel, then have one office (or one person, or a salesperson) in the States. So development will continue in Israel, but product/sales/management will be in the States.

That's by no means the only way, but it's a common one.

There are a few accelerators/etc who specialize in helping Israeli companies approach the US market and get exposed to Silicon Valley.

Other than that, most of the problems are kind of the same as with any startup - trying to build something people want. The biggest issue is if "things people want" is different in Israel and the States, and that's why exposure to the US market right from the start is so important.

As for immigrating to Israel - I don't really know much about the topic. If you're Jewish, it's incredibly easy, but if you're not, then I have no idea what the process looks like. Sorry.


The lack of equally competitive European offerings is to blame, and tax, regulatory, and bureaucratic overhead is to blame for that lack. Most of the interesting European tech is done in universities and dies there. Americans also have a much more pragmatic attitude towards business.

