For an API that deals with a large amount of data, you want to avoid loading all objects into memory. Bumping up the memory limit is only a stop-gap measure, as you said. A few approaches to handle this:
Pagination
Simple but effective: limit your API to return a fixed number of objects per call, with a parameter to control the offset or current page. The Element API plugin supports this by default with the paginate parameter.
If you still need all objects at once on the client side, you can use multiple parallel requests to get all pages and then glue them back together with JavaScript.
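A rough sketch of what that could look like in TypeScript. The endpoint URL, the page query parameter and the meta.pagination.total_pages field are assumptions about your setup, so adjust them to whatever your endpoint actually returns:

```typescript
// Assumed response shape of a paginated endpoint like /api/locations.json?page=2
// (field names are placeholders; check your endpoint's actual output).
interface Paginated<T> {
  data: T[];
  meta: { pagination: { total_pages: number } };
}

async function fetchPage<T>(page: number): Promise<Paginated<T>> {
  const res = await fetch(`/api/locations.json?page=${page}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

async function fetchAll<T>(): Promise<T[]> {
  // Fetch the first page to learn how many pages exist in total.
  const first = await fetchPage<T>(1);
  const totalPages = first.meta.pagination.total_pages;

  // Fetch the remaining pages in parallel.
  const rest = await Promise.all(
    Array.from({ length: totalPages - 1 }, (_, i) => fetchPage<T>(i + 2))
  );

  // Glue all pages back together into a single array.
  return [first, ...rest].flatMap((page) => page.data);
}
```

This keeps each individual request (and the server-side memory it needs) small, at the cost of a few extra round trips.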
Limiting scope
Do you really need 1300 elements at once? Of course it depends on what you're doing with them. Since you said the entries contain geojson, I'm assuming they're related to locations or areas on a map? In that case, a simple way to limit the number of elements you return per request is to add minimum and maximum latitude and longitude parameters to your request and only return elements within those bounds. Then use JavaScript to detect map panning and dynamically load new elements as they come into view.
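To illustrate the client half of that, here's a TypeScript sketch using Leaflet as an example map library. The /api/locations.json endpoint and the minLat/maxLat/minLng/maxLng query parameters are hypothetical; your API would have to implement the actual bounds filtering server-side:

```typescript
import * as L from 'leaflet';

async function loadVisibleElements(map: L.Map): Promise<void> {
  // Hypothetical bounding-box parameters derived from the current viewport.
  const bounds = map.getBounds();
  const params = new URLSearchParams({
    minLat: String(bounds.getSouth()),
    maxLat: String(bounds.getNorth()),
    minLng: String(bounds.getWest()),
    maxLng: String(bounds.getEast()),
  });

  const res = await fetch(`/api/locations.json?${params}`);
  const { data } = (await res.json()) as { data: Array<{ geojson: any }> };

  // Render whatever came back; a real implementation would also remove
  // elements that scrolled out of view and avoid adding duplicates.
  for (const item of data) {
    L.geoJSON(item.geojson).addTo(map);
  }
}

const map = L.map('map').setView([52.52, 13.4], 12); // placeholder center/zoom
loadVisibleElements(map);

// Reload elements whenever the user stops panning or zooming.
map.on('moveend', () => void loadVisibleElements(map));
```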
Of course, this isn't applicable to all situations, but it's a good question to ask. If you're building some kind of store locator, nobody needs a forest of 1300 map markers.
Streaming responses
If you really need to serialize and send thousands of elements in a single request, one way to avoid hitting memory limits is to process the elements one by one. Instead of loading all serialized elements into a huge array and then encoding it as JSON, load one element at a time, serialize and send it to the client before loading the next element.
I don't think this is supported by Element API by default, and it doesn't work well with plain JSON, since a single JSON document can only be parsed once it has arrived in full. You could think about using something like NDJSON (newline-delimited JSON), though this requires some special care on the client side and probably a custom serializer as well. At this point, it might be easier to build a custom controller for your API.
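On the client side, newer browsers can read the response body as a stream, so elements can be parsed and rendered as they arrive instead of waiting for the full payload. A minimal sketch, assuming a hypothetical /api/locations.ndjson endpoint that writes one JSON object per line:

```typescript
// Consume an NDJSON stream line by line, calling onItem for each parsed object.
async function consumeNdjson(
  url: string,
  onItem: (item: unknown) => void
): Promise<void> {
  const res = await fetch(url);
  if (!res.body) throw new Error('Streaming not supported for this response');

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });

    // Every complete line is one JSON object; keep the trailing partial line buffered.
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? '';
    for (const line of lines) {
      if (line.trim()) onItem(JSON.parse(line));
    }
  }

  // Handle a final line that wasn't terminated by a newline.
  if (buffer.trim()) onItem(JSON.parse(buffer));
}

// Usage: render each element as soon as it arrives.
consumeNdjson('/api/locations.ndjson', (item) => console.log(item));
```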