If you look at the GetNumOrderedNumDone method, you can see that you have repeated the column names three times. This can cause problems when you need to rename one of them, because you have to update it in multiple places.
There are several ways to fix this:
- Define consts for column names
- Define a custom attribute, decorate the DTO's properties and use reflection to retrieve column names
- etc.
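For completeness, the second option could look roughly like this. This is only a sketch: ColumnNameAttribute and ColumnNames.Of are made-up names, not framework types (EF also ships a built-in [Column] attribute in System.ComponentModel.DataAnnotations.Schema that you could reuse instead).

```csharp
using System;
using System.Reflection;

// Hypothetical custom attribute to carry the column name
[AttributeUsage(AttributeTargets.Property)]
public sealed class ColumnNameAttribute : Attribute
{
    public string Name { get; }
    public ColumnNameAttribute(string name) => Name = name;
}

public class DtoNumOrderedNumDone
{
    [ColumnName("Quantity")]
    public int Quantity { get; set; }

    [ColumnName("NumDone")]
    public int NumDone { get; set; }
}

public static class ColumnNames
{
    // Reads the column name from the attribute via reflection,
    // falling back to the property name if no attribute is present
    public static string Of<T>(string propertyName) =>
        typeof(T).GetProperty(propertyName)
            ?.GetCustomAttribute<ColumnNameAttribute>()?.Name ?? propertyName;
}
```

With that in place, `ColumnNames.Of<DtoNumOrderedNumDone>(nameof(DtoNumOrderedNumDone.NumDone))` would give you the column name from a single source of truth.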
I will show you the first approach, but in order to get there we need to do some refactoring.
1st iteration
As discussed in the comments section, if you change the RunQuery to receive a Func<DbDataReader, List<T>> parameter, then the column indexes can be cached (on the map function implementation side) and reused:
public static List<T> RunQuery<T>(MyDbContext context, string query,
Func<DbDataReader, List<T>> map,
params SqlParameter[] parameters)
{
try
{
//...
if (cn.State.Equals(ConnectionState.Closed)) cn.Open();
using var reader = command.ExecuteReader();
var entities = reader.HasRows ? map(reader) : new List<T>();
if (oldState.Equals(ConnectionState.Closed) && cn.State == ConnectionState.Open)
cn.Close();
return entities;
}
catch (Exception e)
{
//...
}
}
- Since some responsibility has been shifted to the consumer of your helper method, the DbDataReader usage became cleaner
- I've added the HasRows check here to avoid calling the map function unnecessarily if there is nothing to map
Then your GetNumOrderedNumDone can be rewritten like this:
public static DtoNumOrderedNumDone GetNumOrderedNumDone(int jobTaskId)
{
const string QuantityColumnName = "Quantity", NumDoneColumnName = "NumDone";
var sql =
@$"select convert(int,numordered) as {QuantityColumnName} , convert(int,sum(isnull(numDone,0))) as {NumDoneColumnName}
from task k left outer join taskdone d on k.jobtaskid = d.jobtaskid
inner join job j on k.jobid = j.jobid
where k.jobtaskid = {jobTaskId}
group by k.jobtaskid, j.NumOrdered";
var dtos = DataHelpers.RawSqlQuery(sql, reader => {
int quantityColumnIndex = reader.GetOrdinal(QuantityColumnName);
int numDoneColumnIndex = reader.GetOrdinal(NumDoneColumnName);
return reader.Cast<IDataRecord>().Select(record => new DtoNumOrderedNumDone
{
NumDone = record.GetInt32(numDoneColumnIndex),
Quantity = record.GetInt32(quantityColumnIndex)
}).ToList();
}
);
var dto = dtos.FirstOrDefault();
return dto;
}
- I've defined two constants that store the column names
- Since you are already using string interpolation for your query, it is really convenient to use these there as well
- Inside the RawSqlQuery, first I've cached the column indexes to be able to reuse them multiple times
- The second part of the map implementation is a bit trickier
- If you have a DbDataReader then you can use the Read method to iterate through it, or you can treat it as an IEnumerable
- If you are unaware of this capability then I highly recommend reading these SO topics: 1, 2
- So, after we have converted the DbDataReader to IEnumerable<IDataRecord>, we can use LINQ
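To put the two iteration styles side by side, here is a sketch assuming a reader with a single int column. Note that any given reader is forward-only, so in practice you would pick one style or the other, not both on the same reader:

```csharp
// 1) the classic Read() loop
var viaRead = new List<int>();
while (reader.Read())
{
    viaRead.Add(reader.GetInt32(0));
}

// 2) treating the reader as an IEnumerable of IDataRecord
var viaLinq = reader.Cast<IDataRecord>()
    .Select(record => record.GetInt32(0))
    .ToList();
```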
2nd iteration
What is the problem with the first one?
The RunQuery is generic enough to be used for queries which might return 0, 1 or multiple rows. In your particular case the cardinality is 0..1, so calling ToList inside the RawSqlQuery and then calling FirstOrDefault on the returned collection seems like overkill.
So, if you introduce an overload for the RunQuery (and for the RawSqlQuery as well) which can return 0 or 1 entity then the usage could be more streamlined.
Let's start with the RunQuery
public static T RunQuery<T>(MyDbContext context, string query,
Func<DbDataReader, T> map,
params SqlParameter[] parameters)
{
try
{
//...
if (cn.State.Equals(ConnectionState.Closed)) cn.Open();
using var reader = command.ExecuteReader();
T entity = reader.Cast<IDataRecord>().Count() switch {
1 => map(reader),
0 => default,
_ => throw new InvalidOperationException("Query returned more than one row")
};
if (oldState.Equals(ConnectionState.Closed) && cn.State == ConnectionState.Open)
cn.Close();
return entity;
}
catch (Exception e)
{
//...
}
}
- Here I've used Count() instead of HasRows to be able to handle each case accordingly
- If there is exactly one record then it should call the map function
- If there are zero matching records then it should return default(T)
- If there is more than one matching record then it should throw an exception
The usage could be simplified like this
public static DtoNumOrderedNumDone GetNumOrderedNumDone(int jobTaskId)
{
const string QuantityColumnName = "Quantity", NumDoneColumnName = "NumDone";
var sql =
@$"select convert(int,numordered) as {QuantityColumnName} , convert(int,sum(isnull(numDone,0))) as {NumDoneColumnName}
from task k left outer join taskdone d on k.jobtaskid = d.jobtaskid
inner join job j on k.jobid = j.jobid
where k.jobtaskid = {jobTaskId}
group by k.jobtaskid, j.NumOrdered";
return DataHelpers.RawSqlQuery(sql, reader =>
reader.Cast<IDataRecord>().Select(record => new DtoNumOrderedNumDone
{
NumDone = record.GetInt32(record.GetOrdinal(NumDoneColumnName)),
Quantity = record.GetInt32(record.GetOrdinal(QuantityColumnName))
}).FirstOrDefault()
);
}
- For the sake of brevity I've omitted the exception handling, but you should keep it
- Since we know that the Select will be called only once, we do not need to cache the column indices
UPDATE #1: map(reader) problem
I have to confess that I have rarely used DbDataReader in the past decade, so I had forgotten that Cast<IDataRecord> creates a forward-only iterator. There is no real Reset function that can rewind that IEnumerable. So, after calling the Count method the iteration reaches its end; that's why it can't be reiterated.
There are several workarounds for that, like:
- Issue two select statements where the 2nd is SELECT @@ROWCOUNT, then use NextResult to move the iterator to the next result set
- Load the DbDataReader into a DataTable and pass that around
- Use SingleOrDefault on the map implementation side to enforce 0..1 cardinality
- Pass IDataRecord to map instead of DbDataReader
- etc.
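For reference, the DataTable workaround could be sketched like this. Note the assumption that map takes a Func<DataRow, T> in this variant:

```csharp
// DataTable.Load consumes the forward-only reader exactly once;
// the buffered rows can then be counted and re-read freely.
using var reader = command.ExecuteReader();
var table = new DataTable();
table.Load(reader);

T entity = table.Rows.Count switch
{
    1 => map(table.Rows[0]),
    0 => default,
    _ => throw new InvalidOperationException("Query returned more than one row")
};
```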
Let me show you the last two options
SingleOrDefault
public static T RunQuery<T>(MyDbContext context, string query,
Func<DbDataReader, T> map,
params SqlParameter[] parameters)
{
try
{
//...
if (cn.State.Equals(ConnectionState.Closed)) cn.Open();
using var reader = command.ExecuteReader();
var entity = map(reader);
if (oldState.Equals(ConnectionState.Closed) && cn.State == ConnectionState.Open)
cn.Close();
return entity;
}
catch (Exception e)
{
//...
}
}
- The RunQuery is simplified since the cardinality enforcement is shifted to the map function
public static DtoNumOrderedNumDone GetNumOrderedNumDone(int jobTaskId)
{
const string QuantityColumnName = "Quantity", NumDoneColumnName = "NumDone";
var sql =
@$"select convert(int,numordered) as {QuantityColumnName} , convert(int,sum(isnull(numDone,0))) as {NumDoneColumnName}
from task k left outer join taskdone d on k.jobtaskid = d.jobtaskid
inner join job j on k.jobid = j.jobid
where k.jobtaskid = {jobTaskId}
group by k.jobtaskid, j.NumOrdered";
return DataHelpers.RawSqlQuery(sql, reader =>
reader.Cast<IDataRecord>().Select(record => new DtoNumOrderedNumDone
{
NumDone = record.GetInt32(record.GetOrdinal(NumDoneColumnName)),
Quantity = record.GetInt32(record.GetOrdinal(QuantityColumnName))
}).SingleOrDefault()
);
}
IDataRecord
NOTE: I have tested this approach with SQLite, but I hope it works with MSSQL as well.
public static T RunQuery<T>(MyDbContext context, string query,
Func<IDataRecord, T> map,
params SqlParameter[] parameters)
{
try
{
//...
if (cn.State.Equals(ConnectionState.Closed)) cn.Open();
using var reader = command.ExecuteReader();
var rawEntities = reader.Cast<IDataRecord>().ToList();
T entity = rawEntities.Count switch
{
1 => map(rawEntities.First()),
0 => default,
_ => throw new InvalidOperationException("Query returned more than one row")
};
if (oldState.Equals(ConnectionState.Closed) && cn.State == ConnectionState.Open)
cn.Close();
return entity;
}
catch (Exception e)
{
//...
}
}
- Here we retrieve the IDataRecords into a helper variable and branch on its Count property
- Please note that here we are passing an IDataRecord to the map function, not the reader
public static DtoNumOrderedNumDone GetNumOrderedNumDone(int jobTaskId)
{
const string QuantityColumnName = "Quantity", NumDoneColumnName = "NumDone";
var sql =
@$"select convert(int,numordered) as {QuantityColumnName} , convert(int,sum(isnull(numDone,0))) as {NumDoneColumnName}
from task k left outer join taskdone d on k.jobtaskid = d.jobtaskid
inner join job j on k.jobid = j.jobid
where k.jobtaskid = {jobTaskId}
group by k.jobtaskid, j.NumOrdered";
return DataHelpers.RawSqlQuery(sql, record => new DtoNumOrderedNumDone
{
NumDone = record.GetInt32(record.GetOrdinal(NumDoneColumnName)),
Quantity = record.GetInt32(record.GetOrdinal(QuantityColumnName))
});
}
I suggest the second approach since:
- You can't force the map implementor to call SingleOrDefault instead of First or FirstOrDefault
- The map implementation is way more concise compared to the SingleOrDefault approach
- You are not leaking an implementation detail (DbDataReader), so the map implementor, for example, can't dispose the reader
- Your intent is better expressed since you are passing a single record to map to an entity type
UPDATE #2: Shared RawSqlQuery method
public static T RawSqlQuery<T>(string query, Func<IDataRecord, T> map, params SqlParameter[] parameters)
{
try
{
using var context = MakeDbContext();
return RunQuery(context, query, map, parameters);
}
catch (Exception e)
{
Console.WriteLine(e);
throw;
}
}