I'm writing a proof-of-concept database system in which I'd like to define a base representation of a database table and a row, and then derive concrete table and row classes from those base types. For example, here is what I have laid out in my project --
public interface ITable
{
    void InsertRow(BaseRow row);
}

public abstract class BaseRow { }

public class ConcreteRowA : BaseRow { }

public class ConcreteRowB : BaseRow { }

public class ConcreteTableA : ITable
{
    public void InsertRow(BaseRow row)
    {
        if (row is ConcreteRowA)
        {
            var rowA = (ConcreteRowA)row;
            // do stuff specific to a RowA type
        }
    }
}

public class ConcreteTableB : ITable
{
    public void InsertRow(BaseRow row)
    {
        if (row is ConcreteRowB)
        {
            var rowB = (ConcreteRowB)row;
            // do stuff specific to a RowB type
        }
    }
}
I'm trying to avoid the type check and cast in each implementation, so it would be nice if I could instead write --
public class ConcreteTableA : ITable
{
    public void InsertRow(ConcreteRowA row)
    {
        // do stuff specific to a RowA type
    }
}

public class ConcreteTableB : ITable
{
    public void InsertRow(ConcreteRowB row)
    {
        // do stuff specific to a RowB type
    }
}
But I understand that implementing an interface doesn't work this way. Is that correct, or am I misunderstanding how to use abstract classes and interfaces? Is there a better way to implement what I'm trying to do?
In essence, I would like to require that ConcreteTableA and ConcreteTableB both expose the same method, InsertRow, but with a parameter of a type specific to their implementation (ConcreteRowA or ConcreteRowB). Is there a flaw in my design? I feel like I must be violating some principle.
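
One idea I had was to make the interface generic over the row type, roughly like the sketch below (the ITable&lt;TRow&gt; shape is just my guess at how this might look, not something from my current code):

public interface ITable<TRow> where TRow : BaseRow
{
    void InsertRow(TRow row);
}

public class ConcreteTableA : ITable<ConcreteRowA>
{
    public void InsertRow(ConcreteRowA row)
    {
        // do stuff specific to a RowA type, no cast needed
    }
}

public class ConcreteTableB : ITable<ConcreteRowB>
{
    public void InsertRow(ConcreteRowB row)
    {
        // do stuff specific to a RowB type, no cast needed
    }
}

But then ConcreteTableA and ConcreteTableB implement different closed interfaces (ITable&lt;ConcreteRowA&gt; vs ITable&lt;ConcreteRowB&gt;), so I'm not sure I could still treat them uniformly as a plain ITable, which is part of why I'm asking whether this design is flawed.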