Darrell

Some big benefits of SQLite:

  • Its ACID guarantees. You can write to it transactionally. The file system is not transactional - if an error occurs while writing, say, a JSON or XML file, the file can easily be corrupted.
  • Its ability to query across multiple tables. People above tout the debuggability of CSV and JSON files just because they are readable, but have you tried to query across a JSON file, a CSV file, or several CSV files? With SQLite you can easily query however you like across the entire data store (see the sketch after this list).
  • It's decoupled from the concern of how you exchange the data between the apps. In other words: with many agreed data exchange formats like protobuf, gRPC or JSON, the expectation is that they are served via an API. You may instead want an asynchronous process that doesn't require the two applications exchanging the data to be "online" at the same time: one dumps the data, something orchestrates the transfer, and later the other processes it. In that situation the file size is not necessarily as important as the other properties - ACID, portability, ease of getting the data in and out, etc. You could do this with a set of JSON, XML or CSV files instead, but then you lose all the properties mentioned above. The fact that the schema is described by the SQLite db itself is also very handy: it can be passed between teams and they immediately understand the schema and data types, with much less effort than writing a document to describe the schema of, say, a set of JSON files, or having to write and publish a schema to describe an XML file format.

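The first two points can be illustrated with a minimal Python sketch using the standard-library sqlite3 module (the file name, table names and sample rows are made up for the example): it writes two related tables inside a single transaction and then joins across them.

```python
import sqlite3

# Hypothetical exchange file; any path will do.
conn = sqlite3.connect("orders_export.sqlite")

# The schema lives inside the file itself, so the consumer can discover it later.
conn.executescript("""
    CREATE TABLE IF NOT EXISTS customers (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE IF NOT EXISTS orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id),
        total       REAL NOT NULL
    );
""")

# 'with conn:' commits on success and rolls back on error, so a failure
# part-way through never leaves a half-written, corrupted export.
with conn:
    conn.execute("INSERT INTO customers (id, name) VALUES (?, ?)", (1, "Acme Ltd"))
    conn.executemany(
        "INSERT INTO orders (id, customer_id, total) VALUES (?, ?, ?)",
        [(10, 1, 99.50), (11, 1, 12.00)],
    )

# Querying across tables is a plain SQL join - awkward to do over a pile
# of separate CSV or JSON files.
for name, order_count, spend in conn.execute("""
    SELECT c.name, COUNT(o.id), SUM(o.total)
    FROM customers AS c JOIN orders AS o ON o.customer_id = c.id
    GROUP BY c.id
"""):
    print(name, order_count, spend)

conn.close()
```
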
So all in all:

  • if you are writing a web API (i.e. an "online" mechanism) to exchange or provide data, definitely use an agreed and commonly used standard for APIs, like gRPC, protobuf, JSON etc.
  • if the data is to be supplied to an end user, an external customer, another business etc., SQLite is most probably not the friendliest choice. Think of a user downloading their data from Google or Facebook - a zip file of readable CSV or JSON files is more transparent to them.
  • if you are writing an "offline" data exchange mechanism, where for example you need to dump data in a form that can later be processed by something else, preferably an application you control (not someone else's external one), then I believe SQLite is a great option (see the consumer-side sketch after this list).
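Because the schema travels inside the file, the receiving application can discover it before reading any data. Here is a rough sketch of that consumer side, again plain Python with sqlite3 and assuming the same made-up file name as above:

```python
import sqlite3

conn = sqlite3.connect("orders_export.sqlite")  # file received out of band

# The file is self-describing: list every table with its CREATE statement...
for table, create_sql in conn.execute(
    "SELECT name, sql FROM sqlite_master WHERE type = 'table'"
):
    print(create_sql)
    # ...and each table's column names and declared types.
    for _cid, col, col_type, *_rest in conn.execute(f"PRAGMA table_info({table})"):
        print(" ", col, col_type)

# Then process the data whenever it suits this application - no need for
# the producer to be online at the same time.
rows = conn.execute("SELECT id, customer_id, total FROM orders").fetchall()
print(rows)

conn.close()
```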
