Conf42 Internet of Things (IoT) 2024 - Online

- premiere 5PM GMT

Beyond Traditional Databases: Introducing the Type III Architecture


Abstract

For years, databases have acted as gatekeepers to your data. In the name of performance, they typically store data in proprietary formats, making it difficult to extract for use with other tools. In this talk, I will show you how Apache Parquet and Arrow can power the new generation of databases.


Transcript

This transcript was autogenerated. To make changes, submit a PR.
Hi, my name is Javier. I'm a developer advocate at QuestDB, an open source fast time series database, and I'm here today to speak about databases. I'm not going to speak specifically about QuestDB. I have some demos, but I'm here to speak about databases, about a new architecture, the type three architecture, that we've seen many databases adopting in the past few months, and why this architecture is a good thing for you. But before we start, let me ask you something. When was the last time you wished your database was using a more proprietary data format? Think about that. Did you ever think, damn, my database is broken, it's using a data format which is so standard and so open that I can easily reuse it and share it with any other application, like my data lake or my machine learning pipeline? I don't know about you, but I've been working with databases for longer than I care to remember, and with big and fast data for the past 10 years, and I never thought having data in an open format, in a form which is compatible with many systems, was a bad thing. If your data is in an open format, you can easily share it across different applications without duplicating data and without having to spend time moving data around, transforming data and using resources. What's not to like?

In a traditional database, ingesting data is fairly simple. But if you want to extract data from the database, not just for one specific query that returns a few rows, but the bulk of your data, all of your data from one table, and share it with another system, that's not so easy. Imagine, for example, you want to train a machine learning pipeline and you want to get all the data you have in your database out so your pipeline can train from it. It's going to take a while to export if you have a large table. You typically have to move the data from one server to another, and you typically need to either transform it into another format or ingest it into the other system to make it efficient. In the end, you're going to waste a lot of time just extracting data. In a way, the database is acting as the gatekeeper to your data. It's like: hey, you can get in, but getting out? Ah, that's going to be a bit painful. And I should know about that. As I told you earlier, I work for QuestDB, and at QuestDB we've been doing exactly that for the past few years.

So why do databases behave that way? Think about it: 20 or 30 years ago, databases were designed for a very different use case. Today, in IoT, it's very common to generate data at a few thousand rows per second. I've seen systems generating a few million rows per second on a single application. 20 years ago, that was unthinkable. 20 years ago, if an application was very successful, it would store maybe a few million records in a database over the lifetime of the application. Not in one second, not in one day, not in one year: in the lifetime of the application, you would store maybe a few million rows. And if you had a query that took a couple of seconds, no one would complain. The concept of real time we have today, in which you need milliseconds of latency between ingestion and being able to query the data, because maybe you have a critical process depending on it, was not the common case 20 years ago. So databases were designed in a different way.
And I don't know about you, but in the late 90s and early 2000s I was a web developer, and I heard this complaint very often: hey, the application is slow and the database is the bottleneck. And that was true. The database was the bottleneck, and that happened because, by the end of the 90s and the early noughties, the world changed. We moved from a world in which applications were mostly corporate and databases were mostly corporate to a world in which we had websites for absolutely everything. Not only that, in the early noughties we started getting cell phones and mobile applications. So we had two interesting patterns here. First, data was arriving faster than ever, and from many different places, not from a single point of entry. And second, we were storing not only transactional data, but also analytical data. We started analyzing user behavior, and relational databases were not really designed for that; it was difficult for them to deal with the speed and the amount of the data.

And then we saw two interesting trends in databases. First, NoSQL databases: super fast at inserting data and super fast at querying it, in exchange for no consistency, no constraints, simple queries, no indexes. But super fast. So if you just wanted to ingest fast data, a NoSQL database was a very good option. On the other hand, if you wanted analytics and you didn't want to use a corporate data warehouse, you could use the brand new analytical databases, OLAP databases, which specialized in running complex queries across huge amounts of data. Latency was not great, usually in the seconds to minutes rather than the milliseconds we expect today, and they had to prepare the data in many different ways, so they were optimized for batch inserts and batch queries. But still, they could analyze huge amounts of data.

And of course, analytical databases got better. They started the trend of separating storage and computation, so you could use multiple instances in parallel to query data and then aggregate, reducing latency. We are still talking about seconds rather than milliseconds, but they were way faster. They also introduced the trend of the data lake, a central repository where you store the data in open formats: most commonly Parquet, but it could be JSON or CSV. Basically, formats that different tools can consume without duplicating the data, which is great, and which is what we are talking about today. And still, even with the data lake and with separated storage and computation, these tools were mostly for batch.

One thing that was still a problem with analytical databases was that they were designed for immutable data. Because of the way they prepared data, and because of the formats they used, like Parquet, maybe stored in object storage like Google Cloud Storage, Amazon S3 or Azure Blob Storage, they were not really designed for random access and random modifications, but for append-only operations. And having immutable data was not ideal. This has changed now, and I will talk about that later. But at the time, analytical databases were not really designed for real time, and they were designed around immutable data. But what if you are working on a use case in which you are generating streaming data? IoT is of course one of the typical such use cases, along with finance, energy or mobility data.
If you are working with streaming data, ideally you want a system, a database, that can do everything. How hard can that be? It turns out that working with streaming data is tricky. First, because streaming data can get very big. If every few milliseconds, or every few seconds, you are getting new data points from many different devices, you are going to reach a few billion records very quickly, and a few billion records is something most databases are not going to handle comfortably. And data never stops, so whenever you need to do any calculation there is always more data coming, and you need some way of setting time barriers, or sampling data by time, or something like that.

Of course, the data you get when you are streaming in real time is not going to be constant. You are going to have bursts of data and you are going to have some lag, because you have sensors in different factories across the world and the network is slower in some places. And just because of latency, data generated closer to the server is going to reach the server faster, even if it was generated later. That is going to happen. Devices are going to run out of battery, they are going to restart, they are going to get disconnected. So data is going to arrive out of order, it is going to arrive late, and very often it will arrive after you have already emitted some results, so you need to be able to update whatever you computed. If you are working with immutable systems, that is going to be hard. You also need some way of working with individual points, but, as data gets old, some way of aggregating it, because aggregations over older data are more valuable than the individual points, which in turn are more valuable for recent-time analytics. And all of this, of course, with low latency and with queries that show data as fresh as possible, because very often you will be working with critical systems in which maybe you are getting data from a sensor and you need to emit an alert if something is wrong so the operator can clear the area. You cannot allow seconds of latency in some use cases. So that's tricky.

And that made a new type of database appear, which I'm going to call fast databases. Fast databases are like a specialization of analytical databases designed for real time. That's basically it. And among fast databases, a very popular type, of course, in IoT, is the time series database. Time series databases specialize in very fast ingestion and very fast queries over recent data, but they can also query historical data, and they use techniques like downsampling old data, maybe deleting old partitions, or moving data to object storage, so queries over historical data are slower but the data is still available. Basically, time series databases specialize in those types of use cases.

As I told you before, I work for QuestDB. QuestDB is an open source time series database. It has the Apache 2.0 license, which basically means you can use it for any use case you want, completely for free, of course. And it's a fairly popular project: we have today almost 15,000 stars on GitHub and over 150 contributors. This slide is slightly out of date, but yeah, it's a popular project, and we have thousands of users happily using QuestDB.
And I want to give you an overview of the internals of QuestDB and what QuestDB is, so you can see why we are adopting the Type 3 architecture now. Because for the past 10 years we've been a traditional database, a fast database, but with a traditional architecture, using our own format. It's not proprietary, because it's open source; you can go and see how we store the data. But no one else uses this format, so basically we are not compatible with any other system. Until now, that is, because we are changing.

So let me tell you a little bit about QuestDB first. QuestDB is a fast database. We store data in columnar format, so it's very quick to retrieve. We have a custom-made parallel SQL engine with a just-in-time compiler, so every time you execute a query we can parallelize it across many CPUs and many threads. The data is always partitioned; you'll see a bit about that later. We don't use indexes. We have some specialized indexes, but we usually don't store data using indexes except for very specific use cases, and in most cases we are very fast without having to index the data. We have very low latency between ingestion and the data being queryable, so it's practically immediate: after we write the data, you can already query it. We separate ingestion and reads, so even if your database is under a heavy query load, you still get a predictable ingestion rate. If you know that with your current machine you can always ingest, let's say, 350,000 events per second, then even when your server is under a heavy load of queries, you can be sure you'll keep ingesting at that rate. And we have goodies like built-in deduplication and upserts. Basically, what you would expect from a modern time series database these days.
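Since the live demos below focus on querying rather than ingestion, here is a minimal sketch of what streaming rows into QuestDB with the official Python client typically looks like. The table name, columns and configuration string are illustrative assumptions on my part; check the client documentation for the exact settings for your setup.

```python
# Minimal ingestion sketch using the QuestDB Python client (questdb package).
# Table name, columns and the configuration string are illustrative assumptions.
from questdb.ingress import Sender, TimestampNanos

conf = "http::addr=localhost:9000;"  # HTTP transport of the Influx line protocol

with Sender.from_conf(conf) as sender:
    sender.row(
        "sensor_readings",                        # hypothetical table
        symbols={"device": "factory-7-temp-01"},  # low-cardinality tags
        columns={"temperature": 21.7, "battery": 0.83},
        at=TimestampNanos.now(),                  # designated timestamp
    )
    sender.flush()
```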
Let me show you a little bit of how you can use QuestDB. I'm not going to do an ingestion demo; if you want to test ingestion, you can go to the QuestDB website or to this repository and try it on your own. Instead, I'm going to show you a couple of things we can do in QuestDB. For example, this is a public dashboard with live financial data. We are ingesting data in real time, and Grafana is sending SQL queries to QuestDB. Actually, I'm going to make this a bit faster. Now, every quarter of a second, we are sending queries to QuestDB, and every quarter of a second we are refreshing the data, so you can see it's quite responsive. We have another dashboard here, which for IoT might be more relevant: taxi rides in the city of New York. Every time a taxi journey starts or finishes, we plot it on a map, and we also have some passenger stats and the correlation between the tip and the fare of the taxi ride, and so on and so forth.

These two dashboards are powered by public data sets available on our demo machine. If you go to demo.questdb.io, you can play with this data yourself. This is the trading data, and this is the taxi rides, the trips data. The trips data is not a small data set: it's 1.6 billion records. It's not huge, but it's not too bad. So we have 1.6 billion records with a lot of columns, and we have some sample queries here you can test yourself.

I'm going to ask, for example, for the average distance on this data set. It's 2.8 miles, and it took 200 milliseconds to calculate that average over 1.6 billion rows. Of course, if I limit this to just one year, for example the year 2018, it's way faster: only 19 milliseconds to calculate the average distance, and we are still reading a large amount of data. Let me see how many trips we have in 2018: 110 million records, not too bad. And beyond the plain average, we have extensions to do things like getting the average distance sampled in 15-day intervals. Actually, I'm going to add the date here so this is easier to see. If I execute this query, I now get one row for each 15-day interval; you can see the dates here, one record every 15 days. And this is arbitrary: I can sample every two months, I can go from years down to microseconds, any interval I want, 22 days, whatever. Here, as you see, it took 140 milliseconds to calculate the average in 22-day intervals. It's quite flexible and performant.
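For reference, a query like the SAMPLE BY aggregation in the demo can also be run programmatically. Here is a small Python sketch against the public demo instance over QuestDB's REST /exec endpoint; the column names are my assumptions about the public trips schema, so treat this as a sketch rather than a copy-paste recipe.

```python
# Sketch: run a SAMPLE BY aggregation against the public demo instance
# through QuestDB's REST /exec endpoint. Column names (pickup_datetime,
# trip_distance) are assumptions about the demo schema.
import requests

QUERY = """
SELECT pickup_datetime, avg(trip_distance) AS avg_distance
FROM trips
WHERE pickup_datetime IN '2018'
SAMPLE BY 15d
"""

resp = requests.get("https://demo.questdb.io/exec",
                    params={"query": QUERY}, timeout=30)
resp.raise_for_status()
payload = resp.json()  # JSON document with 'columns' and 'dataset' keys

for row in payload.get("dataset", []):
    print(row)
```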
I don't want to talk too much about this, though. What I want to tell you about is the architecture we designed and why we are changing it. In the past few years, we've implemented a lot of things to make QuestDB super efficient. For example, a couple of years ago we implemented something called the parallel write-ahead log, which allows QuestDB, with the Postgres protocol that we support, or the Influx line protocol, or CSV, to ingest data in parallel and apply changes in parallel, both on the primary machine and on any replicas, so your data can be read as quickly as possible.

We also store the data partitioned by time. For each time partition, in this case I'm partitioning by month, there is a directory, and inside the directory there is a binary file with the contents of each column. If the column has a fixed length, like an integer, a long or a float, it's just one file. If the data type is variable size, like a string or a varchar, we use two files, one for the data and one for the offsets, so we know where each row starts. Basically, we use our own format to store the data in a very efficient way. If you look at the contents of the database folder in QuestDB, this is what you will see: for each table, several partitions, one for every time unit; inside the partitions, multiple binary files; plus metadata, like the transactions, and some temporary folders with the write-ahead log files. That's the physical layout we have in QuestDB.

We also realized last year that a columnar format is very efficient for querying data, but actually not that efficient for ingesting it, because in real life most applications send whole rows. So even if you query data by columns, you send data in row chunks. What we do now is, rather than ingesting by column as we did in the past, we ingest data by rows and store it by columns, but we only have to open files once, not multiple times like before, so ingestion is a bit faster.

And I'm telling you all these things because I want you to see that in the past few years, even though we were using our own format, we were doing everything possible to make the life of our users as convenient as possible. We also added multi-primary ingestion, so now you can have multiple machines writing data while we make sure there are no conflicts. You can get higher throughput, or you can even have a machine in one region ingesting data and a machine in another region, both replicating data across the cluster. So we did every kind of thing to make the life of our users better. And we got to the point where adoption was growing, many users were using QuestDB, and we thought: this is cool, we've already implemented everything we wanted in the engine and we can start doing incremental changes. But then we realized that users were asking again and again for things we didn't have, and that the design we had for the database was getting obsolete.

Why was that? If you've been paying attention, in the past couple of years there's been a lot of talk about new file formats for big data: Apache Hudi, Apache Iceberg, Delta Lake. They have been around for a few years, but in the past two years there's been a lot of buzz about them. Basically, these are formats that let you store data in Parquet files and add table behavior on top of Parquet: updates, deletes, incremental changes in your tables on top of Parquet files. So the constraints I mentioned before for analytical databases, about data being immutable and hard to update, are being removed by these formats. More importantly, these formats are open, and as more and more tools adopt them, you can consume the data from multiple applications without any duplication.

We also saw that, more and more, users want to do machine learning on top of their data. Users use QuestDB for real-time analytics and for seeing trends across historical data, but they also want to do predictive analytics. They want to use data science or machine learning to learn from the data they already have in the database. And since our data format was not open, it was difficult for them to train on the data: they would have to export it to CSV, as I told you before, or read it with tools like Pandas, row after row, before they could train on it. When they're doing machine learning, there are two use cases. Some users want to export the whole data set so they can train their models elsewhere, and what they would prefer is a Parquet file, or a directory with several Parquet files, so they can just point their models there and train from that. The other type of machine learning user wants to run queries, maybe aggregations or downsampling, directly on the database, and then work with the results in tools like Python or R or whatever they are using. They want to run the queries in the database, but they want the data to get to the client application as fast as possible. As of today, we use the Postgres protocol, and when you use the Postgres protocol, or any other traditional protocol, JDBC, whatever you use with your database, you are sending row after row of data to the client application.
The client application needs to deserialize that data into objects before it can use it. If you are doing that with a few thousand rows, you are fine. If you are doing that with a few million rows, it is way too slow and uses way too much memory. That is basically the problem we were seeing. In order to solve it, we wanted to do something new, and we are adopting what we call the type three database, which is something we've seen in other databases too, not only QuestDB; it's a trend. As of today, we don't know of any other database fully implementing the type three database, but we see many databases taking these ideas, these concepts, and implementing parts of them.

The first component of a type three database is distributed computing, which basically means storage is separated from computation. The storage might even be an object store like Amazon S3 or Azure Blob Storage, and computation is separate from that, so your queries can execute on several machines and return faster. The second part, as I told you already, is storing the data in open formats. If your data is stored in an open format that many tools can use, whenever you want to reuse that data for anything, you can skip the database completely and just go to the storage. Those formats are also compressed, so you are going to save a lot of money and a lot of time. It's also important that these formats support semi-structured data. In most cases, the data in your table has some structure: the timestamp and some columns. But in many use cases you might have optional columns, or devices of different types that sometimes have one schema and sometimes a different one. For that, it's important to support semi-structured data, like JSON does, or like Parquet can; otherwise, you won't be able to model some of your data sets efficiently.

And the last part: in a type 3 database, the data egress should be as fast as the data ingress. What we mean by this is that traditionally, fast databases have focused on ingesting individual rows very fast and outputting aggregated results very fast, but they were not designed to also output individual rows very fast. And that is something that needs to change: in a type 3 database, getting individual rows out of the database should be as fast as getting rows into it. For that, you can use new data formats. I already told you about Iceberg and Parquet for storing data; now let me tell you about Apache Arrow. You have probably heard of Apache Arrow already, because it's been quite trendy in the past few months, but if you haven't, it's an in-memory format that allows you to share data across multiple applications without deserializing it. Basically, with Apache Arrow, when you ask my database to send you data, I can create the data directly in the Arrow format. Arrow is not a row format, it's a columnar format, so I send you the data down the wire already in columnar form, in an open in-memory format which is already compatible with many libraries.
So tools like Apache Spark, or Pandas, or Dask, or virtually any programming language, already have Arrow libraries. When I send you the data in Arrow format, you can use it directly from your programming language without having to deserialize it in memory. You save the time spent serializing and deserializing, and you don't have to duplicate data in memory either. So getting the data and being able to use it becomes very fast, especially if you are working with a lot of data. And the cool thing about Arrow is that it gives you not only the memory format, but also ADBC, which, like JDBC or ODBC, is a protocol for working with SQL data, but one in which the wire format is Arrow itself. So if you have a library that can speak ADBC, you can connect to my database, you can connect to QuestDB, using the ADBC protocol, and when you query the data you get the rows directly in the columnar Arrow format. The client application doesn't need to do any conversions, and you get zero-copy memory operations, making the whole process very efficient. What this means is that you can now stream data out of your database as fast as you can stream data into it.

In QuestDB, we're already adopting this architecture. We are closing the gap between a time series database and an analytical database, because now, since we are already producing data in Parquet format, we can directly ingest Parquet, and we can also read data generated elsewhere in Parquet and query it with the QuestDB engine. So we are becoming an analytical database and a time series database all in one. Of course, the query engine is decoupled from storage. As I told you, this is still in beta, but it's going to be available for everyone early next year. We already support ADBC, so you can have zero-copy operations. If you prefer to use the Postgres protocol or JDBC, as you do today, you can still do that, but if your client library supports ADBC, like pandas or Polars, you can just use ADBC to query data and everything will be much more efficient. The data we generate is stored directly in compressed Parquet, which is Iceberg compatible, so you can reuse it on any other system without having to go through QuestDB. We are not gatekeeping your data anymore. And, as I told you already, if you have Parquet that was generated externally, you can point QuestDB at those Parquet files and query that data directly from there.
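To make the ADBC part more concrete, here is a minimal Python sketch of querying over ADBC and receiving Arrow data back. Since QuestDB's ADBC support is described above as beta, the driver choice (Flight SQL), the URI and the table name are my assumptions for illustration, not the confirmed QuestDB endpoint.

```python
# Sketch: query over ADBC and get columnar Arrow data back, assuming a
# Flight SQL-compatible ADBC endpoint. URI, port and table are hypothetical.
import adbc_driver_flightsql.dbapi as flightsql

conn = flightsql.connect("grpc://localhost:8815")
cur = conn.cursor()
cur.execute("SELECT ts, avg(temperature) FROM sensor_readings SAMPLE BY 1h")

table = cur.fetch_arrow_table()  # pyarrow.Table, no row-by-row deserialization
df = table.to_pandas()           # hand the columnar data straight to pandas
print(df.head())

cur.close()
conn.close()
```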
A specific thing we are doing in QuestDB, which is a bit unique, is that, as I told you earlier, we have our own binary format in order to be very efficient. So what we do these days is: the latest partition of the data is still stored in our binary format, and the older partitions, if you are partitioning by day, that's data from yesterday and before, are stored in Parquet. Even if data arrives late and out of order, or there is an update, we update the Parquet files. But the recent partition, if you have a partition per hour, that's the last hour of data, stays in the binary format. Why? Because, as I said before, our binary format is designed to be super efficient, and when you are working with real-time data, most queries are on the most recent part of the data, which means the most recent partition.

Since those queries are also the most critical for latency, we store the most recent partition of each table in our binary format, so the query latency and the data freshness are as good as possible; we don't need to read from Parquet for that data. All the other partitions are in Parquet. When you run a SELECT across multiple partitions, you don't have to worry about whether the data lives in the binary format or in Parquet. And if you have object storage, you can define multi-tiered storage, in which the latest partition is in the binary format, the last month of partitions is in Parquet on your local disk, and everything older than one month goes to object storage, for example S3. So you can define three tiers of data: the recent partition, the data in Parquet on your local file system, and the data in object storage. Whenever there is a query, QuestDB fetches the data from whichever storage it lives in and executes the query, but queries that run on the most recent partition will be faster because they use the binary storage. We call this the first mile of the data. With this optimization, we can still offer super fast queries with super low latencies, but keep compatibility with the rest of your systems by storing all the other partitions in Parquet.

So this is, in and of itself, the Type 3 architecture. As I said, we are adopting it in QuestDB, but other databases are also adopting Arrow and Parquet; it's a trend we see in many other databases, and this is our implementation. You can ingest data using streaming systems like Redpanda, Kafka or Confluent, whatever you are using. You can ingest data using the client libraries or the Influx line protocol, and you can also ingest from CSV. Whichever way you ingest the data, the parallel writers will write it in the QuestDB file format for the latest partition and to Parquet for all the other partitions, or for data that arrives late. Those Parquet files can be consumed directly by tools like Polars, Pandas, Spark or DuckDB; anything you are using can read directly from Parquet without going through the query engine. But if you want to run queries, you can of course use the SQL clients, with JDBC, the Postgres protocol, or ADBC. You can efficiently query data either with Arrow, with the pgwire protocol, or with the REST API. And again, those reads are in parallel; we scale computation independently of storage to get your data back as fast as possible.

As a last demo, I want to show you how we are working with Parquet as of today, and how you can use external tools, in this case DuckDB, to read the Parquet data we are producing from QuestDB. Let me show you my local QuestDB. Here I have a table with public data from water sensors in Chicago. It has the measurement timestamp, the beach name, water temperature, turbidity, wave height, wave period, battery life, different things.
I can just run a query on this data, and of course, as I told you earlier, I can do things like: give me the timestamp, the beach name, the average temperature and the average turbidity in, let's say, one-month intervals, so that for each beach name and each month I get the average temperature. As you can see here, different months and different beaches, and I have a hundred rows like this; the whole data set has about 120,000 rows. This is public data from Chicago.

If I go to my file storage, here is the directory for this data set, and I can see that all the partitions are stored as Parquet, except for the last one, which is stored, as I told you earlier, in the binary format, with a different binary file for each column. For the columns that are variable size, I have two files, as you can see here. And this one is a specific type we call symbol; it has some extra metadata, but that's about it. All the partitions are Parquet, and the most recent partition, what we call the active partition, is in the binary format.

And now I can just open DuckDB, which is a fantastic database for batch analytics, and run a query like this: I want to read from Parquet, and I point it at my directory and all the folders inside it, all the partitions, saying: this is a partition folder and it contains a lot of Parquet files, so find me everything in there. If I don't filter anything, I get the whole data set, a hundred and twenty-something thousand rows. And now I can run queries, for example only for a specific point in time, and I get 5,478 rows. If I go back to my web console and run the same query, you can see the number is larger: querying for the same date, the number here is over 5,500, while in DuckDB it was 5,400 something. Why is that? As I told you earlier, the latest partition, the active partition, is not stored in Parquet. The partition corresponding to September 2023, which is the latest in this data set, is stored in the binary format because it's the most recent one, so DuckDB doesn't have access to it. If I wanted, I could export this partition to Parquet from QuestDB, but by default the latest partition is not available, so the difference is in that one. If I go back to my web interface and query the table partitions, I see that the latest one is September 2023, as we saw on disk. And if I now run my query between this date and the latest one, I get 5,478 rows, which is exactly the number of rows I have in DuckDB.

So, as we wanted to show, you can now use QuestDB to ingest your data, run your real-time analytical queries, and still use any other system, DuckDB in this case, but it could be Pandas, Polars, anything you want, to query the generated Parquet files.
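Here is a rough Python sketch of that last DuckDB step, pointing read_parquet at the partition folders QuestDB wrote to disk. The directory path and the timestamp column name are placeholders for whatever your installation uses; the point is only that the Parquet partitions are readable without going through the QuestDB engine.

```python
# Sketch: read the Parquet partitions written by QuestDB directly with DuckDB.
# The path and the timestamp column name are hypothetical placeholders.
import duckdb

parquet_glob = "/var/lib/questdb/db/chicago_water_sensors/*/*.parquet"

# Row count over everything stored as Parquet (the active partition,
# still in QuestDB's binary format, is not included).
total = duckdb.sql(
    f"SELECT count(*) FROM read_parquet('{parquet_glob}')"
).fetchone()[0]
print("rows in Parquet partitions:", total)

# The same kind of date filter as in the demo comparison.
rows = duckdb.sql(f"""
    SELECT count(*)
    FROM read_parquet('{parquet_glob}')
    WHERE measurement_timestamp >= TIMESTAMP '2023-08-01'
      AND measurement_timestamp <  TIMESTAMP '2023-08-02'
""").fetchone()[0]
print("rows on 2023-08-01:", rows)
```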
And if you want to use QuestDB, as I told you already, it's open source. You can use it for absolutely anything you want with the open source version, and you can ingest up to 5 million events per second on a single instance. What's not to like? But if you need enterprise capabilities, bring your own cloud, role-based access control, single sign-on with identity providers such as Entra ID, enhanced security on all protocols, cold storage, multi-primary ingestion, or of course enterprise support, we also have a QuestDB Enterprise offering. We recommend most users start with QuestDB open source, see if you like it, and if you do, contact us about Enterprise. And that's about it. I hope this was informative, and I hope you now understand why the type three database is a good idea. If you want to learn more about QuestDB, I'm leaving some links here where you can learn a bit more. Thank you and have a nice day.

Javier Ramirez

Developer Advocate - Developer Relations Lead @ QuestDB



