New Quant Modules, Discounts and Updates
As mentioned in our last post, we are going to i) offer a 40% discount on our annual subscriptions, and ii) raise base prices afterward.
Also, just in case you missed it - I am Maldives-bound and will be away until 5th March, so there will be no posts till then. I am also going to extend the expiry of the discount coupon until 7th March, so I have a couple of days to process any requests once I am back.
Which brings me to what’s up once I am back: if you have noticed, we have not updated the market notes in a while, mostly because I was working through the quantpylib GitHub repo code and website. The simulator package has more or less been cleaned up. The market notes are envisioned to be an encyclopedic text for all things quant trading and quant dev. We have a lot of quant dev material that we have released over the last 3-6 months, but which is not yet in the market notes. We are going to compile it into the market notes and release it to you, for your learning purposes.
On top of that, there is a new module I am looking to incorporate into our quantpylib library, namely a data service layer. If you have noticed, the quantpylib repo currently contains a data_service package, but it is largely based on the code architecture I built in this post:
At the time of writing, a lot of readers told me they liked how the SDK was engineered into a data master object, so that they could integrate a powerful data query service into their applications.
I then went on to integrate this with the MongoDB service, such that the data master automatically polled and stored unseen data into our database, and otherwise retrieved data directly from the database to limit API requests to third-party providers.
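The pattern described above is essentially a cache-aside lookup. Here is a minimal sketch of the idea; the class and method names (`DataMaster`, `get_ohlcv`, the `db`/`api` interfaces) are hypothetical stand-ins, not the actual quantpylib code:

```python
class DataMaster:
    """Sketch of a cache-aside data master: serve from storage when
    possible, and hit the third-party API only for unseen data."""

    def __init__(self, db, api):
        self.db = db    # any storage object exposing get(key) / put(key, value)
        self.api = api  # any retrieval object exposing fetch(key)

    def get_ohlcv(self, ticker, start, end):
        key = (ticker, start, end)
        cached = self.db.get(key)
        if cached is not None:
            return cached           # already stored: no API request made
        data = self.api.fetch(key)  # unseen data: poll the third party
        self.db.put(key, data)      # persist it for future queries
        return data
```

The payoff is that repeated queries for the same slice of data cost one API request in total, which is exactly what keeps you under third-party rate limits.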
Thinking back, there are a lot of potential improvements. The biggest issue was that the data master picked up dependencies on both data retrieval and data storage. This complicates usage through a form of dependency hell - to use the data master at all, you need both database connections and data-retrieval libraries.
I think a better approach would be to have separate data retrieval and data storage libraries, with a connector module in the middle for those who wish to combine both features. Additionally, each data source should be fully contained in its own wrapper - for instance, a yfinance wrapper and an EOD wrapper. If a particular data object can be queried from more than one data source, we can then poll according to a priority preference.
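To make the idea concrete, here is a rough sketch of how such a connector might poll per-source wrappers in priority order. Everything here (`DataConnector`, the wrapper classes, `get_price`) is hypothetical illustration, not quantpylib's actual API:

```python
class YFinanceWrapper:
    """Hypothetical stand-in; a real wrapper would call the yfinance SDK."""
    def get_price(self, ticker):
        raise ConnectionError("yfinance unreachable")  # simulate an outage

class EODWrapper:
    """Hypothetical stand-in for an EOD data wrapper."""
    def get_price(self, ticker):
        return 100.0  # canned value for illustration

class DataConnector:
    """Polls self-contained source wrappers in priority order."""

    def __init__(self, sources):
        self.sources = sources  # list ordered by priority preference

    def get(self, method, *args):
        for src in self.sources:
            fn = getattr(src, method, None)
            if fn is None:
                continue  # this source cannot serve the data object
            try:
                return fn(*args)
            except Exception:
                continue  # source failed: fall through to next priority
        raise LookupError(f"no source could serve {method}{args}")
```

Because each wrapper is fully self-contained, a user who only needs yfinance never imports the EOD dependencies, and the connector stays a thin, optional layer on top.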
The database itself can be significantly improved, and we are hoping to work on that. One of the issues is the database dependency - I would like users to have their own choice of backend, whether SQLite, MySQL, Arctic, NoSQL or so on, configured with just a simple database connection string, with an ORM/ODM layer to help with the abstraction.
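One way to picture the "just a connection string" goal: the string's scheme alone selects the backend behind a common storage interface. A minimal sketch using only the standard-library sqlite3 module - the `connect` factory and `SQLiteStore` names are hypothetical, and a real version would hand other schemes to an ORM/ODM:

```python
import sqlite3

class SQLiteStore:
    """Minimal key-value backend behind a common storage interface."""

    def __init__(self, path):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)"
        )

    def put(self, k, v):
        self.conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (k, v))
        self.conn.commit()

    def get(self, k):
        row = self.conn.execute(
            "SELECT v FROM kv WHERE k = ?", (k,)
        ).fetchone()
        return row[0] if row else None

def connect(conn_string):
    """Pick a storage backend from the connection-string scheme alone."""
    scheme, _, rest = conn_string.partition("://")
    if scheme == "sqlite":
        return SQLiteStore(rest or ":memory:")
    # a fuller version would dispatch mysql://, mongodb://, etc. here
    raise ValueError(f"unsupported backend: {scheme}")
```

Callers only ever see `put`/`get`, so swapping SQLite for MySQL or a document store becomes a one-line config change rather than a code change.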
As you can probably imagine, this is going to take another good few months, and up to a year… I have not written the code for it yet, so I am sure I will run into issues as I plod along. As always, I am looking forward to the learning, and I will document, comment, and update you as I go, so that I can take you along on my learning journey.
It is kind of funny - when I first started writing on Substack, there was no Russian Doll, or Genetic parser to speak of. Many revisions, iterations and feedback later, we have the quantpylib.simulator library that many of you use in your own research process. In the process, many of you have developed your own quant dev abilities and grown from novice programmers to more advanced engineers.
Thanks for letting me be part of that journey.
Cheers.