
You could check out a tool for trade signal generation based on machine learning and feature engineering:

https://github.com/asavinov/intelligent-trading-bot

It trains ML models on historical data and custom features and then uses them to generate a kind of intelligent indicator between -1 and +1. This indicator is then used to make trade decisions. The frequency is a parameter and can vary from 1 minute for crypto trading to 1 day for conventional exchanges.
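
For illustration, a minimal sketch of how such a score in [-1, +1] could be turned into discrete trade decisions (the thresholds and names are illustrative, not the bot's actual parameters):

    # Turn a continuous "intelligent indicator" into a discrete trade action.
    # Thresholds are hypothetical and would normally be tuned on historical data.
    BUY_THRESHOLD = 0.5
    SELL_THRESHOLD = -0.5

    def decide(score: float) -> str:
        # score is the indicator value in [-1, +1]
        if score >= BUY_THRESHOLD:
            return "buy"
        if score <= SELL_THRESHOLD:
            return "sell"
        return "hold"

    print(decide(0.72))   # buy
    print(decide(-0.1))   # hold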


The sequence

    FROM Table AS t WHERE t.Condition SELECT t.col1, t.col2, ...
might be more natural than the traditional

    SELECT t.col1, t.col2, ... FROM Table AS t WHERE t.Condition
If we compare it with how loops are written in programming languages:

    Loop -> SELECT-FROM-WHERE
    Table -> Collection
    AS t -> Loop instance variable
    WHERE -> condition on instances
In Java and many other programming languages, we write loops roughly as follows:

    foreach x in Collection
        if x.field != value:
            continue  // skip elements that do not satisfy the condition
        // Do something with x, for example, add it to the result set
So we first define the collection (table) whose elements we want to process. Then we think about the condition they have to satisfy, using the instance variable. And finally, in the loop body, we do whatever we want, for example, return the elements which satisfy the condition.

In Python, loops also specify the collection first:

    for x in Collection:
A Python list comprehension, however, uses the traditional order:

    [(x.col1, x.col2) for x in Collection if x.field2 == value]
Here we first specify what we want to return, and only then the collection and its condition.


From my experience with cryptocurrencies and the intelligent trading bot [0], I would say that transformers will not provide significant benefits when applied to traditional statistical (numeric) forecasting problems. Such models assume that older events have little influence on current ones.

Yet, there exist problems where even old events retain their strength. An example is where we want to take into account discrete events (analogous to tokens in an LLM) for predicting stock prices. These events might be explicitly defined (holidays, company announcements, important economic figures, etc.) or derived from the data, like technical patterns. The strength of transformers is in their ability to ignore the order of events and the distances between them; more precisely, transformers can learn when order and distance are important. In language models, this is used to generate output sequences where semantically equivalent tokens appear in a completely different order than in the input sequence. Something similar can be done in time series forecasting if we define the "tokens" accordingly, for example, as technical patterns. Then rising stock prices can be explained (and predicted) not only by recent numeric behavior but also because "something happened" two weeks ago.
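
To make this more concrete, here is a minimal sketch (not taken from the bot) of how discrete market events could be fed to a small transformer; the vocabulary, sizes, and output head are all illustrative assumptions:

    import torch
    import torch.nn as nn

    # Hypothetical vocabulary of discrete market "tokens": holidays, announcements,
    # technical patterns, etc. All sizes below are illustrative.
    VOCAB_SIZE, SEQ_LEN, D_MODEL = 64, 128, 32

    class EventTransformer(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
            self.pos = nn.Embedding(SEQ_LEN, D_MODEL)
            layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(D_MODEL, 1)  # e.g. a score in [-1, +1] after tanh

        def forward(self, tokens):
            # tokens: (batch, SEQ_LEN) integer ids of past events
            positions = torch.arange(tokens.size(1), device=tokens.device)
            x = self.embed(tokens) + self.pos(positions)
            x = self.encoder(x)
            return torch.tanh(self.head(x[:, -1]))  # summarize with the last position

    model = EventTransformer()
    fake_events = torch.randint(0, VOCAB_SIZE, (8, SEQ_LEN))
    print(model(fake_events).shape)  # torch.Size([8, 1])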

[0] https://github.com/asavinov/intelligent-trading-bot Intelligent Trading Bot: Automatically generating signals and trading based on machine learning and feature engineering


I agree that conventional (numeric) forecasting can hardly benefit from the newest approaches like transformers and LLMs. I came to this conclusion while working on the intelligent trading bot [0] and experimenting with many ML algorithms. Yet, there exist some cases where transformers might provide significant advantages. They could be useful where the (numeric) forecasting is augmented with discrete event analysis and where sequences of events are important. Another use case is where certain patterns matter, like those detected in technical analysis. Yet, for these cases much more data is needed.

[0] https://github.com/asavinov/intelligent-trading-bot Intelligent Trading Bot: Automatically generating signals and trading based on machine learning and feature engineering


When developing an automatic trading system the following aspects are important:

- Data feeds and data ingestion. This can be a fairly independent component which collects data from different sources (possibly even discussion forums) and makes it available to other components in a uniform format

- Feature generation. The source data is rarely used in its original form for decision making, and having good (informative) features is frequently the primary factor of success. Moving averages are an example, but nowadays they alone will hardly work

- Signal generation. Here some logic is applied in order to emit discrete decisions, and such models are heavily parameterized with thresholds (see the sketch after this list).

- Real trading and order management as well as coordination of all activities.
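
A minimal sketch of how these four components could be wired together; all names are hypothetical and the logic is deliberately simplified:

    def ingest(raw_ticks):
        """Data feed: normalize records from different sources into one format."""
        return [{"ts": t["ts"], "price": float(t["p"]), "volume": float(t["v"])} for t in raw_ticks]

    def make_features(rows, window=10):
        """Feature generation: a toy moving average and momentum."""
        prices = [r["price"] for r in rows]
        return {
            "ma": sum(prices[-window:]) / min(window, len(prices)),
            "momentum": prices[-1] - prices[0],
        }

    def make_signal(features, threshold=0.0):
        """Signal generation: emit a discrete decision; the thresholds are the key parameters."""
        return "buy" if features["momentum"] > threshold else "hold"

    def run_once(raw_ticks, place_order):
        """Coordination: pull data through the pipeline and hand signals to order management."""
        rows = ingest(raw_ticks)
        if make_signal(make_features(rows)) == "buy":
            place_order("buy", rows[-1]["price"])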

The article sheds some light on the technological aspects and the general pipeline used to process the data and manage orders. Although this is interesting in itself, I would expect more details about how to scale the solution and how to implement it asynchronously, especially since it uses Go, which has a special construct for exactly that purpose: channels.

I understand that it is not the focus of the article, but having some general information about its trading logic, how to plug in new strategies, and how to parameterize existing ones would help. Some links at the end are quite interesting for me because I am developing an intelligent trading bot based on ML and feature engineering (https://github.com/asavinov/intelligent-trading-bot), for which such articles are quite relevant.


Hey, thanks. Yeah, I agree with you. That's an oversight on my end. I'll tell you here though.

I'm just using goroutines and channels to talk between them and then a giant mutex for locking. That's basically it. So, as new data comes in, it builds aggregates (tick-based candlesticks) as needed; this then triggers the BUY logic loop on that new data, and if something is detected, that triggers an IB API order. It is dead simple and nothing complex in here. I've had upwards of 100 positions being tracked at any one time and it seems to just work. So, I haven't messed around with complex async logic too much.
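
Roughly, the bar building looks like this (sketched in Python for readability rather than my actual Go code; the names and bar size are illustrative):

    TICKS_PER_BAR = 100  # hypothetical bar size

    def build_tick_bars(trades, ticks_per_bar=TICKS_PER_BAR):
        bars, bucket = [], []
        for trade in trades:               # trade: {"price": float, "size": float}
            bucket.append(trade)
            if len(bucket) == ticks_per_bar:
                prices = [t["price"] for t in bucket]
                bars.append({
                    "open": prices[0],
                    "high": max(prices),
                    "low": min(prices),
                    "close": prices[-1],
                    "volume": sum(t["size"] for t in bucket),
                })
                bucket = []                # a completed bar is what triggers the BUY logic
        return bars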

I'm actually just hard-coding the parameters right into the BUY loop. This probably sounds crazy, but for a small setup like this they don't change that much. So, I can run some trades, tweak things, restart, and then test some more. I imagine if you were doing this in an enterprise setting you'd have some formal language and hot loading and stuff. But, for me, hard coding seems to work well enough.


Also, what's your approach to risk management?

For example:

1. Do you have anything that limits the size of a single trade or position?

2. How do you measure and manage the overall volatility/risk/VaR of your portfolio?

3. What kind of safeguards do you have to avoid catastrophic bugs? (For reference, see Knight Capital and how a single bug brought down the entire company: https://www.henricodolfing.com/2019/06/project-failure-case-...) Given how fast your system trades, I imagine it must be difficult to visually spot these errors. (You did mention paper trading, but I wonder if you have anything else you want to mention.)

Thanks so much for sharing your knowledge publicly. It's very much appreciated!


Here are some simple rules. I basically just read about these and then stole the ideas. I wish I had data to back this up, but it seems to be working.

- No single bet can be more than 5% of all money. The small bet sizes are what really save your bacon: even if a few lose 10+% you're still fine overall.

- I'm also limiting the number of shares I bet and try to keep it in the low hundreds so that I get really fast fills.

- I have something that stops everything if I lose more than $1k in a day.
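
In code, those limits look roughly like this (just a sketch with made-up names, not my actual code; the numbers are the ones above):

    MAX_POSITION_FRACTION = 0.05   # no single bet above 5% of total capital
    MAX_SHARES = 300               # keep share counts in the low hundreds for fast fills
    MAX_DAILY_LOSS = 1_000         # stop everything after losing $1k in a day

    def allowed_to_buy(total_capital, order_value, shares, daily_pnl):
        if daily_pnl <= -MAX_DAILY_LOSS:
            return False   # daily kill switch
        if order_value > MAX_POSITION_FRACTION * total_capital:
            return False   # position too large
        if shares > MAX_SHARES:
            return False   # order too big to fill quickly
        return True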

I don't do anything to measure overall risk or volatility. Almost everything I'm in is highly volatile. I'm basically betting as things go up and then try to cash out. Sometimes you hit the top.

Yeah, I inspect all the trades at the end of the day (well, and during the day) and try to feed anything new back into the system. This is 100% manual. But if a single trade loses $100 or something, I'm definitely in there looking at what happened.


Thanks!

What is your win rate (trades that make money as a fraction of all trades)?

How many trades does your system make on average in a given day?

Do you hold positions overnight?


Around a 45-50% win rate, but I try to keep the runners going for as long as possible and exit the losing trades quickly (this basically just costs you the commission on losers, but sometimes more). You can tell within a minute or two if what you expect to happen is going to happen. So, even with a 50/50 win/loss rate you can still make money. I make around 100-200 trades per day. I sell everything before market close.
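
For a rough sense of why roughly 50/50 still works, with made-up numbers (not my actual averages):

    # Expected profit per trade = p_win * avg_win - p_loss * avg_loss
    p_win, p_loss = 0.5, 0.5
    avg_win, avg_loss = 120.0, 50.0      # let winners run, cut losers quickly
    expectancy = p_win * avg_win - p_loss * avg_loss
    print(expectancy)                    # 35.0 dollars per trade, before commissions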


Does it work synchronously or asynchronously? For example, if you integrate WebSockets then you could act as soon as you get new data. If it works synchronously then you regularly request data (say, every second) and then act.

If you process many positions and then choose the top 5 candidates, do you pick one of them to buy, or can you allocate resources among them (depending on some score)?


Both? I'm getting a constant stream of data via the websocket. As data comes in it gets added to the large in-memory object, and then once I have X number of trades, I'll go and build the candlestick bar. As soon as that bar is built, it triggers the BUY loop to inspect it and see if it matches what I'm looking for. If it does, it sends a buy order out to the IB API. I'm not ranking anything. I'm just looking to see if it matches and then buying. So, I have something that tracks the total money I have, and if I have any free money (and we don't already own that stock), it makes a bet. That's the logic. I wish I had some ranking logic, but that's what it's doing.

That logic is happening for 5500+ stocks all in real time, which is pretty insane. But it works really, really well. Go is amazing. At market open and close there are 60k+ events per second across trades and quotes. So, that loop is processing like 60k events at times, building each stock out, and then looking to see if we should buy/sell.


> What gives you advantage is trading algo, which is always hard to find.

In the end it is necessary to make a decision whether to buy or sell (and how much), and this decision competes with decisions other participants make based on their own logic. Developing such logic (a strategy) manually is of course quite difficult. I developed an intelligent trading bot which derives its trading strategy from historical data:

https://github.com/asavinov/intelligent-trading-bot

Currently it works for cryptocurrencies but can be applied to other markets:

https://t.me/intelligent_trading_signals

> I've spent months on figuring out the best parameters for trading. Ended up this working only on historical data, while in reality it was totally different.

This is a typical situation. The whole problem is to develop a strategy which works on future (unseen) data. Even backtesting algorithms should be designed so that future information (data) does not leak into the past.
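
One common way to avoid such leakage is a walk-forward (expanding-window) split, where each model is trained only on data that precedes the period it is evaluated on. A minimal sketch with hypothetical sizes:

    def walk_forward_splits(n_rows, min_train=1000, test_size=250):
        # Yield (train, test) index ranges; training data always precedes the test block.
        start = min_train
        while start + test_size <= n_rows:
            yield range(0, start), range(start, start + test_size)
            start += test_size

    # Usage: for train_idx, test_idx in walk_forward_splits(len(data)):
    #            fit the model on train_idx only, then evaluate on test_idx.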


For any such tool, two questions are of primary importance:

- How connections between multiple tables are represented and managed

- How derived data is described (queries, workflows etc.)

Typically such tools are aimed at simplifying data connections, but normally they end up with some kind of join-like approach which requires high expertise and is error-prone. So users have to deal with exactly what they wanted to get rid of when they bought the tool. Plato is no exception: "No SQL needed." Yet, I could not find any information on how exactly it manages connections between tables and how the unified "virtual table" is defined.

The second question is about how we can derive new data from existing data. Ideally, users would like to have something very similar to Excel because spreadsheets are indeed extremely intuitive: we define new cells as functions of other cells (which in turn might be functions of other cells). In Plato I found "virtual columns" which should be rather useful. This is somewhat similar to the column-oriented approach implemented in Prosto [0]. Yet, what is really non-trivial is how to define (derived) columns by combining data from multiple tables.
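
To illustrate what derived columns across multiple tables could look like without an explicit join, here is a small pandas sketch (my own toy example, not how Plato or Prosto implement it): a link column maps each order to a row of the products table, and a calculated column is then defined through that link.

    import pandas as pd

    products = pd.DataFrame({"name": ["apple", "pear"], "price": [1.0, 2.0]}).set_index("name")
    orders = pd.DataFrame({"product": ["apple", "pear", "apple"], "qty": [3, 1, 2]})

    # "Link" column: each order row points to a row of the products table.
    orders["unit_price"] = orders["product"].map(products["price"])
    # Derived column defined in terms of other columns, no explicit join needed.
    orders["amount"] = orders["unit_price"] * orders["qty"]
    print(orders)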

In general, the tool looks very promising and I hope that additional features and additional information will make it really popular.

[0] https://github.com/asavinov/prosto Prosto is a data processing toolkit radically changing how data is processed by heavily relying on functions and operations with functions - an alternative to map-reduce and join-groupby


Hey. Good questions.

Pkey/Fkey joins are supported in Plato by "expanding" foreign keys. You can log in, connect to our sample DB, and play around with how it works. It's pretty easy. We're soon adding support for generic joins as well.

And, yes! We take a more structured, column-oriented approach than Excel, much like Airtable. Though we are introducing an Excel-like formula language for defining derived values. It will compile to SQL on the backend but won't expose that to the user.


> I have always kept in mind is that feature engineering is almost always the key difference between success and failure

I also developed an ML-powered service heavily relying on feature engineering

https://github.com/asavinov/intelligent-trading-bot Intelligent Trading Bot

Its difference from Didact is that the intelligent trading bot is focused on trade signal generation with a higher evaluation frequency. It is more suitable for cryptocurrencies but also works for traditional stocks at daily frequency, so it could be adapted for stock picking. What I find interesting in your work is the general design of this kind of ML system relying on feature engineering.


> Joins are what makes relational modeling interesting!

Joins are the central part of the RM; they are difficult to model using other methods and require high expertise in non-trivial use cases. One alternative, which allows multiple tables to be analyzed without joins, is proposed in the concept-oriented model [1], which relies on two equally important modeling constructs: sets (as in the RM) and functions. In particular, it is implemented in the Prosto data processing toolkit [2] and its Column-SQL language [3]. The idea is that links between tables are used instead of joins. A link is formally a function from one set to another set.

[1] Joins vs. Links or Relational Join Considered Harmful https://www.researchgate.net/publication/301764816_Joins_vs_...

[2] https://github.com/asavinov/prosto data processing toolkit radically changing how data is processed by heavily relying on functions and operations with functions - an alternative to map-reduce and join-groupby

[3] Column-SQL https://prosto.readthedocs.io/en/latest/text/column-sql.html


One idea is to use columns instead of cells. Each column has a definition in terms of other columns, which might in turn be defined in terms of other columns. If you change values in some source column, the changes propagate through the graph of column definitions. Fragments of this general idea are implemented in different systems, for example Power BI or Airtable. The main difficulty in any formalization is how to deal with columns in multiple tables.
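
A toy sketch of this propagation idea (entirely hypothetical names, single table only): columns are defined as functions of other columns, and changing a source value means re-evaluating the dependent definitions.

    table = {"price": [1.0, 2.0, 3.0], "qty": [5, 2, 1]}
    definitions = {
        # Listed so that each derived column depends only on earlier columns.
        "amount": lambda t: [p * q for p, q in zip(t["price"], t["qty"])],
        "discounted": lambda t: [a * 0.9 for a in t["amount"]],
    }

    def recompute(table, definitions):
        for col, fn in definitions.items():
            table[col] = fn(table)
        return table

    recompute(table, definitions)
    table["price"][0] = 10.0       # change a value in a source column...
    recompute(table, definitions)  # ...and the derived columns are re-evaluated
    print(table["discounted"][0])  # 45.0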

This approach was formalized in the concept-oriented model of data, which relies on two basic elements: mathematical functions and mathematical sets. In contrast, most traditional data models rely on sets only. Functions are implemented as columns. This model gets rid of joins and group-by, making data processing simpler and more intuitive.

This approach was implemented in the Prosto data processing toolkit: https://github.com/asavinov/prosto

