lowki, my blockchain analytics and HFT platform, is about to go through a bit of a pivot. let’s talk about why.

when I first started developing Lowki, I wanted the focus to be primarily on blockchain investigations, fraud analysis, trading pattern analysis, etc. High-frequency trading (HFT) was certainly part of it, but the biggest thing was that I wanted to make the blockchain as transparent as possible, and making all of the transactions viewable was only part of the battle. the transactions had to actually be understandable as well. people needed the ability to analyze the data at scale, see patterns in the fog, and make decisions based on what they saw.

transparency doesn’t matter if nobody can understand what they’re looking at.

I realized that this extends to other areas as well: social media is incredibly transparent, in that the data on what’s going on is out there for most to see, yet we’re constantly surprised at the misinformation that starts and spreads across these platforms like wildfire. misinformation and bot campaigns should be analyzable at scale, even without the backend access one would get by working at Twitter or Facebook.

asset ownership by the political class is technically transparent in the US, in that there are public documents that have to be filed by politicians as they buy or sell stocks, and their net worth and real estate assets are also public… yet how many people can say they know where to find this information or how to analyze it?

we have access to satellite data across the globe, yet the ability to analyze that imagery at scale to see things like deforestation and urban development is more or less only available to a select few corporations.

the big problem isn’t just that we can’t read the blockchain; it’s that we have access to all of the data in the world and close to zero ability to actually use it to make decisions. relative to our access to data, we are the dumbest generation in history.

lowki - data ingestion and analysis from small to planet scale

the pivot I am undertaking now is to build a decentralized, federated platform for data ingestion, processing and analysis.

the goal is to make data ingestion and analysis more accessible for individuals, small teams, activists, journalists, media outlets, traders, researchers, security teams, families and students. Lowki will bring artificial intelligence into every data set possible. I will offer third-party data sources for when you want data you don’t have, and ingestion pipelines for the data you do have. a rough sketch of what a pipeline could look like is below.
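to make that concrete, here’s a sketch of the kind of shape an ingestion pipeline might take. every name here is hypothetical, since none of this exists yet:

```typescript
// hypothetical shape of a lowki ingestion pipeline -- illustrative, not implemented

interface IngestionSource<T> {
  id: string; // e.g. "solana-mainnet", "sec-filings"
  // pull a batch of raw records, starting from an opaque cursor
  fetch(cursor?: string): Promise<{ records: T[]; nextCursor?: string }>;
}

interface Enricher<T, U> {
  // attach derived fields (entity labels, AI summaries, graph edges, ...)
  enrich(record: T): Promise<U>;
}

// a pipeline is just a source plus an enricher feeding a sink
async function runPipeline<T, U>(
  source: IngestionSource<T>,
  enricher: Enricher<T, U>,
  sink: (record: U) => Promise<void>,
): Promise<void> {
  let cursor: string | undefined;
  do {
    const batch = await source.fetch(cursor);
    for (const record of batch.records) {
      await sink(await enricher.enrich(record));
    }
    cursor = batch.nextCursor;
  } while (cursor);
}
```

the cursor-based fetch keeps sources stateless, which seems like it would matter once ingestion is spread across federated nodes.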

does this mean it will only be capable at a smaller scale? no

if you are a large corporation that wants to be able to make intelligent decisions based on massive data sets, I don’t see any reason why you shouldn’t benefit from the Lowki platform as well.

intelligence shouldn’t be an asset only available to giant corporations.

into the technical weeds

it goes without saying that this plan is very much subject to change, as lowki is at such an early stage of development that it can’t even be called an alpha.

lowki web

initially, the plan was to build lowki out as a native-first application, meaning it would run outside of the browser on a low-level rendering pipeline like Raylib. I had even played around with building it in Godot, which might still happen honestly.

I decided, though, that I’d be sacrificing a lot of agility by developing it this way. native is fairly difficult to develop for, especially for something that nobody might ever use. instead, I will be focusing on an “advanced proof-of-concept” on the web via ThreeJS. once I start running up against limitations in the browser, I’ll start worrying about converting to a native application, but judging by what I’ve seen of ThreeJS’ performance, I’ll probably make a ton of headway on the web version before I ever have to worry about inherent perf limitations in the browser.
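to give a sense of why I’m not worried yet: ThreeJS will happily push 100k+ points through a single draw call. a minimal sketch, roughly where a transaction-graph proof-of-concept would start (the data here is random noise standing in for real transactions):

```typescript
import * as THREE from 'three';

// render 100k fake "transactions" as one point cloud -- a single draw call
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 50;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// placeholder data: random positions standing in for transaction graph nodes
const COUNT = 100_000;
const positions = new Float32Array(COUNT * 3);
for (let i = 0; i < positions.length; i++) {
  positions[i] = (Math.random() - 0.5) * 100;
}

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
const points = new THREE.Points(geometry, new THREE.PointsMaterial({ size: 0.15, color: 0x44ffaa }));
scene.add(points);

renderer.setAnimationLoop(() => {
  points.rotation.y += 0.001; // slow spin so you can eyeball the whole set
  renderer.render(scene, camera);
});
```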

the front end doesn’t matter as much as the data pipelines on the backend anyway. I can always write a new front end on top of the data backend, but the pipelines I’m going to create matter far more to the project, so I’d rather have a good-enough front end that I can develop fast alongside the data backend than the other way around.

because of this, Lowki Native has been pushed way down the line.

agent development

alongside the data pipeline development, I’ll be working on ways to integrate agentic behavior into the platform as well. things like intelligently grabbing data from the Solana blockchain to enrich the relationship graph, enriching social media data with replies, detecting bots with LLMs, and making intelligent decisions with multi-agent “councils” will add a ton of value to the project and increase data ingestion.
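as a sketch of what the Solana enrichment step might look like, using @solana/web3.js. the Edge shape is made up for illustration, and a real agent would decide which accounts are actually worth turning into relationships:

```typescript
import { Connection, PublicKey, clusterApiUrl } from '@solana/web3.js';

// hypothetical edge shape for the relationship graph
interface Edge { from: string; to: string; signature: string }

// fetch recent signatures for a wallet and turn co-occurring accounts into graph edges
async function edgesForAddress(address: string, limit = 20): Promise<Edge[]> {
  const connection = new Connection(clusterApiUrl('mainnet-beta'), 'confirmed');
  const sigs = await connection.getSignaturesForAddress(new PublicKey(address), { limit });

  const edges: Edge[] = [];
  for (const { signature } of sigs) {
    const tx = await connection.getParsedTransaction(signature, { maxSupportedTransactionVersion: 0 });
    if (!tx) continue;
    // every other account in the transaction becomes a candidate relationship
    for (const account of tx.transaction.message.accountKeys) {
      const other = account.pubkey.toBase58();
      if (other !== address) edges.push({ from: address, to: other, signature });
    }
  }
  return edges;
}
```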

tuning some local models offers interesting potential as well, though I have done very little research on this. fine-tuning some models to specialize in Solana data intelligence, for example, sounds like it has real potential. creating a large dataset of pseudo-human-readable documents (JSON, XML, etc.) that contain generated summaries of what the transactions are and what they mean could add a ton of value to a tuned agent.
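the dataset generation itself could be pretty simple: pair each parsed transaction with a model-generated summary and append JSONL training records. a sketch, where summarize is a stand-in for whatever LLM call ends up writing the descriptions:

```typescript
import { appendFileSync } from 'node:fs';

// one JSONL training record per transaction: raw JSON in, plain-english summary out.
// `summarize` is a placeholder for whatever LLM generates the descriptions.
async function appendTrainingRecord(
  path: string,
  tx: object,
  summarize: (tx: object) => Promise<string>,
): Promise<void> {
  const record = {
    prompt: `Explain this Solana transaction:\n${JSON.stringify(tx)}`,
    completion: await summarize(tx),
  };
  appendFileSync(path, JSON.stringify(record) + '\n');
}
```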

this is an area I’m fairly unfamiliar with, so I’ll more or less be exploring the potential as I go.

the business plan

there is going to be a very long R&D lead before I even think about monetizing the platform. the tech stack I’m using means that things like authentication, payments, etc. aren’t going to be as easy as importing a couple of node libraries and calling it a day like I would with a typical Next/React app. I’m also not particularly worried about monetizing Lowki since I’ll be dogfooding it for a while anyway.

eventually, though, I’ve got a few customer profiles I’d like to target:

- normies: people who want to play around with a lot of data, explore the platform and maybe learn some stuff. low monthly cost, likely around $10/mo if I can keep my costs low.
- traders: people specifically interested in the financial data on the blockchain, or in regular stock market data if I decide to pull that in as well. they’ll make greater use of a lot more data, so probably a higher price here.
- journalists/NGOs/intel folks: more of a B2B approach here, so probably custom contracts depending on need

honestly, I’m not going to change up my R&D process to worry about monetization for quite a while. I’m just going to build out what needs to be built, and I’ll worry about monetizing when that makes sense.

from a marketing perspective, I have a few strategies I’ll try out that mainly center around content marketing:

- building twitter/bsky/discord bots and agents that can be used to dynamically query the dataset (a rough sketch of this is below)
- blogs (like this one) and YouTube videos showing off use cases, development updates, etc.
- there are a few high-value researchers I’d love to get in with to pick their brains as power users, the top one being CoffeeZilla
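as an example of the bot idea from the list above, a minimal discord.js sketch; queryLowki is a placeholder for however the platform ends up exposing queries:

```typescript
import { Client, Events, GatewayIntentBits } from 'discord.js';

// placeholder: this would hit whatever query API lowki ends up exposing
async function queryLowki(question: string): Promise<string> {
  return `(not implemented yet: "${question}")`;
}

const client = new Client({
  intents: [GatewayIntentBits.Guilds, GatewayIntentBits.GuildMessages, GatewayIntentBits.MessageContent],
});

// answer any message that starts with "!lowki "
client.on(Events.MessageCreate, async (message) => {
  if (message.author.bot || !message.content.startsWith('!lowki ')) return;
  const answer = await queryLowki(message.content.slice('!lowki '.length));
  await message.reply(answer);
});

client.login(process.env.DISCORD_TOKEN);
```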

until next time…

I’m going to be doing fairly frequent devblogs on Lowki from here on out, with more frequent updates on twitter. you can also subscribe to my newsletter for updates delivered straight to your inbox.