Your question is too broad to answer with concrete code, but what you describe is definitely possible.
You can easily search your Elasticsearch index for documents matching your full-text criteria.
Then collect those documents' PK fields (or any other candidate key that uniquely identifies rows in your PostgreSQL database), and filter your Django ORM-backed models for PKs matching the ones Elasticsearch returned.
Roughly, the code would be:
from elasticsearch import Elasticsearch

def get_chunk(l, length):
    # Yield successive slices of l, each at most `length` items long.
    for i in range(0, len(l), length):
        yield l[i:i + length]

es = Elasticsearch()
res = es.search(index="index", body={"query": {"match": ...}})

pks = []
for hit in res['hits']['hits']:  # hits are nested under res['hits']['hits']
    pks.append(hit['_source']['pk'])  # the indexed document must carry the PK

for chunk_10k in get_chunk(pks, 10000):
    DjangoModel.objects.filter(pk__in=chunk_10k, **the_rest_of_your_api_filters)
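One caveat: es.search returns only the first page of hits (10 by default). If the query can match many documents, the scan helper in elasticsearch-py iterates over all of them. A minimal sketch, assuming the same index layout as above (the content field name and search terms are hypothetical):

from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch()
# scan() transparently pages through every matching document,
# not just the first page that es.search() returns.
pks = [hit['_source']['pk']
       for hit in scan(es, index="index",
                       query={"query": {"match": {"content": "search terms"}}})]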
EDIT
To handle the case where your Elasticsearch query matches a very large number of PKs, you can use a generator that yields successive 10K-item slices of the results, so you don't exceed your database's limits on query size and each query stays fast. That is what the get_chunk function above does.
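For instance, if the goal is a bulk update rather than a fetch, the same chunked pattern applies. A sketch, where the is_indexed field is hypothetical:

for chunk_10k in get_chunk(pks, 10000):
    # Each UPDATE statement touches at most 10,000 rows.
    DjangoModel.objects.filter(pk__in=chunk_10k).update(is_indexed=True)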
The same approach would work with alternatives such as Redis, MongoDB, etc.
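As an illustration, here is a minimal sketch of the same pattern against MongoDB with pymongo, assuming a text index exists on the collection (the database, collection, and field names are hypothetical):

from pymongo import MongoClient

client = MongoClient()
# $text requires a text index on the collection.
docs = client.mydb.articles.find(
    {"$text": {"$search": "search terms"}},
    {"pk": 1},  # project only the PK field stored alongside each document
)
pks = [doc["pk"] for doc in docs]

for chunk_10k in get_chunk(pks, 10000):
    DjangoModel.objects.filter(pk__in=chunk_10k)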