Saturday, April 30, 2022

Factory method design pattern Typescript

I'm just learning design patterns and I don't see the benefit of the abstraction in the code below: I rewrote the code without it and it still works the way we want.

You can see the original code here.

class Creator {
  public factoryMethod() {}

  public someOperation(): string {
    const product = this.factoryMethod();
    return `Creator: The same creator's code has just worked with ${product}`;
  }
}

class ConcreteCreator1 extends Creator {
  public factoryMethod(): string {
    return '{Result of the ConcreteProduct1}';
  }
}

class ConcreteCreator2 extends Creator {
  public factoryMethod(): string {
    return '{Result of the ConcreteProduct2}';
  }
}

function clientCode(creator: Creator) {
  console.log(
    "Client: I'm not aware of the creator's class, but it still works."
  );
  console.log(creator.someOperation());
}

console.log('App: Launched with the ConcreteCreator1.');
clientCode(new ConcreteCreator1());
console.log('');

console.log('App: Launched with the ConcreteCreator2.');
clientCode(new ConcreteCreator2());
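
For contrast, a rough sketch of what the abstract version looks like (the Product interface and names here are illustrative, simplified from the usual example): the abstract class makes the compiler force every new creator to supply a factoryMethod, and lets someOperation work against a Product type rather than whatever a subclass happens to return.

interface Product {
  operation(): string;
}

abstract class Creator {
  // Subclasses MUST implement this; forgetting it is now a compile error.
  protected abstract factoryMethod(): Product;

  public someOperation(): string {
    const product = this.factoryMethod();
    return `Creator: The same creator's code has just worked with ${product.operation()}`;
  }
}

class ConcreteProduct1 implements Product {
  public operation(): string {
    return '{Result of the ConcreteProduct1}';
  }
}

class ConcreteCreator1 extends Creator {
  protected factoryMethod(): Product {
    return new ConcreteProduct1();
  }
}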

Passing a member variable as a parameter in a method of the same class

I'm developing a game with Unreal Engine and I have a class that represents something (a pawn) that I can move inside the game's level. I'm using spherical coordinates to move it.

This class has a method to convert its spherical coordinates into a Cartesian location because Unreal uses Cartesian location to place pawns inside the level.

A piece of the class is:

struct SphericalCoordinates
{
    float Radius;
    float Azimuth;
    float Zenit;
};

class MyClass
{
public:
    // Convert spherical coordinates to Cartesian coordinates.
    FVector SphericalToCartesian(SphericalCoordinates Spherical) const;


private:

    SphericalCoordinates SphereCoors;
    
};

The private member SphericalCoordinates SphereCoors is the one that I'm going to pass to the method SphericalToCartesian as its parameter. In other words, inside the same class MyClass, I'm going to call it this way:

SphericalToCartesian(SphereCoors);

I use the member variable SphereCoors to store the spherical coordinates of the object instead of computing them every time I need them.

Is it a good idea to pass a member variable as a parameter in a method of the same class?

Of course, I could move this method to another class because it only does coordinate transformations, but I still think it is worth asking whether passing a member variable as a parameter to a method of the same class is good design.
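
For what it's worth, a common compromise is to keep the stateless conversion taking the coordinates explicitly and add a thin, parameterless member for the "convert my own coordinates" case. A minimal sketch (GetCartesianLocation is a hypothetical extra member; it assumes Zenit is the polar angle and both angles are in radians):

FVector MyClass::SphericalToCartesian(SphericalCoordinates Spherical) const
{
    // Pure conversion: no member state is read, so it is easy to test in isolation.
    return FVector(
        Spherical.Radius * FMath::Sin(Spherical.Zenit) * FMath::Cos(Spherical.Azimuth),
        Spherical.Radius * FMath::Sin(Spherical.Zenit) * FMath::Sin(Spherical.Azimuth),
        Spherical.Radius * FMath::Cos(Spherical.Zenit));
}

FVector MyClass::GetCartesianLocation() const
{
    // Convenience member for the common case: convert the stored coordinates.
    return SphericalToCartesian(SphereCoors);
}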

Template method pattern with subclasses that have different methods

Let's consider two objects that have a lot in common, but that differ on some minor points and therefore both need to have their own method:

public class ParentClass{

    private int var1, var2, ..., varN; // many variables in common

    public int method1(){ ... };
    [...]
    public int methodN(){ ...}; // many methods in common

}

public class ChildClass1 extends ParentClass{

   // no exclusive variable

    public void specificMethod1(){ ... }; // specific method

}

public class ChildClass2 extends ParentClass{

    // no exclusive variable

    public void specificMethod2(){ ... }; // specific method

}

Is there a "standard" way to deal with such problems? I am currently learning design patterns so I thought that template method could be useful here by doing something like that:

public abstract class ParentClass{

    private int var1, var2, ..., varN; // many variables in common

    public int method1(){ ... };
    [...]
    public int methodN(){ ...}; // many methods in common

    public abstract void specificMethod1(); // specific method
    public abstract void specificMethod2(); // specific method

}

public class ChildClass1 extends ParentClass{

    @Override
    public void specificMethod1(){

        // implementation
    }

    @Override
    public void specificMethod2(){
        
        throw new RuntimeException("this method can only be used by ChildClass2!");
    }

}

public class ChildClass2 extends ParentClass{

    // Same here but implements specificMethod2 and throw exception in specificMethod1

}

Is that a good way of dealing with the problem? Is there another design pattern that suits better here? Or even just another approach that I didn't think about?

EDIT: Of course, I could declare every specific method in the associated child class, but at compile time I have no idea whether I will need the first or the second class (and I am sure that dynamic casting is not a good idea here either), therefore I am using the parent class everywhere.
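
For comparison, a sketch of one commonly suggested alternative (the method name here is invented): keep a single abstract hook on the parent, so neither child has to throw for the other's method. This only helps when callers don't need to choose which specific method runs; when they do, separate interfaces or the visitor pattern are the usual suggestions.

public abstract class ParentClass {

    // ... common fields and methods ...

    public abstract void specificBehaviour(); // the only child-specific entry point
}

public class ChildClass1 extends ParentClass {
    @Override
    public void specificBehaviour() {
        // ChildClass1's implementation
    }
}

public class ChildClass2 extends ParentClass {
    @Override
    public void specificBehaviour() {
        // ChildClass2's implementation
    }
}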

Make a Python program to print the following

(image of the required output omitted)

This is the output I have to print. Please do it as soon as possible. Thank you for your help.

Can anyone give me a reference to the architectural design of any Node.js project?

I have recently been asked to provide an architectural design/diagram of a backend Node.js project that I am the sole developer of. Having never created such a design/diagram, I am confused about how and where to start. Any reference to an architectural design/diagram of an existing project (preferably Node.js) would be really helpful.

Best way to retrieve the objects from variable arguments

I have an interface:

public interface Animal
{
   public Object info(Object ... o);
}

One of its implementations:

public class Cat implements Animal
{
   public Object info(Object ... o)
   {
     // Extracting the information passed
     String a1 = (String) o[0];
     int b1 = (int) o[1];
     boolean c1 = (boolean) o[2];
     // ... use a1, b1 and c1 ...
     return null;
   }
}

The datatypes of the arguments are always String, int and boolean respectively at the place where the info method of Cat is called.

My question is: is there a neater way of extracting these objects and casting them back to their original types?

The following solution would have been ideal, but it cannot be used to implement the interface in Java:

public class Cat implements Animal
{
  public Object info(String a1, int b1, boolean c1)
  {
    
  }
}
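
One possible alternative (a sketch, not the only option): make the interface generic in its input type so each implementation declares exactly the data it needs and no casts are required. CatInfo here is a made-up carrier class for Cat's three values.

public interface Animal<T> {
    Object info(T input);
}

public class CatInfo {
    public final String a;
    public final int b;
    public final boolean c;

    public CatInfo(String a, int b, boolean c) {
        this.a = a;
        this.b = b;
        this.c = c;
    }
}

public class Cat implements Animal<CatInfo> {
    @Override
    public Object info(CatInfo in) {
        // in.a, in.b and in.c are already correctly typed.
        return in.a + " " + in.b + " " + in.c;
    }
}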

Friday, April 29, 2022

Bash command to search for a pattern (sequence) and print everything next to the pattern (on the right and left sides)

I'm trying to reconstruct a gene sequence based on a PoolSeq file of a population (FASTA format) and a conserved area. I want to search the file for matches with this sequence and then build up the neighboring area starting from that conserved sequence.

So I basically need a Bash command to search a FASTA file for a sequence segment and print the neighboring region of the match in every read.

File: FASTA file of diverse individuals of a species

Input: 20-30 bp Sequence

Output: All reads with that sequence and the neighboring region in that read
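
For illustration only, and assuming each read sits on a single line and ACGTACGTACGTACGTACGT stands in for the conserved 20-30 bp segment, grep can already print a fixed amount of flanking sequence per read:

# Print up to 50 bases on each side of the conserved segment for every read containing it.
grep -oE '.{0,50}ACGTACGTACGTACGTACGT.{0,50}' reads.fasta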

Thursday, April 28, 2022

Interface Exploding in Adapter Pattern

I have several classes with different interfaces and now I want to unify them. Here is an example in C++

class Add {
public:
    void setOperand1(int o1) { operand1 = o1; }
    void setOperand2(int o2) { operand2 = o2; }

private:
    int operand1;
    int operand2;
};

class Negate {
public:
    void setOperand(int o) { operand = o; };

private:
    int operand;
};

I want to unify the interfaces of the two. The Adapter design pattern comes to mind. The drawback is that the adapter interface must be the union of the two adaptees' interfaces, which may cause the interface to explode if more operator classes are added later.

My question: is using an Adapter best practice in a scenario like this? And are there any alternatives that are proof against interface explosion? Thank you.
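
For concreteness, a sketch of the union-style adapter interface this describes (illustrative only); each new operand shape widens it further:

class OperatorAdapter {
public:
    virtual ~OperatorAdapter() = default;
    virtual void setOperand1(int) {}   // used by the Add adapter
    virtual void setOperand2(int) {}   // used by the Add adapter
    virtual void setOperand(int) {}    // used by the Negate adapter
};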

Basic design question: two different implementations of an interface which fall under the same domain

This could be a silly question, but I need some thoughts.

I need to design two different implementations of an interface; both update a configuration in the application, but the configs being updated are different.

The first update is triggered whenever any application is created/deleted/updated; the config related to those apps is updated in a data.json file which is read by a 3rd party. The trigger for this update is straightforward and a sync call.

The second config update is triggered by multiple user operations. Instead of calculating the diff in the config, the approach we took is to regenerate the config at an interval, based on a flag, with a scheduler, in an async manner.

Now the problem is having two different implementations of an interface with different arguments. To solve this I added a functional interface with one method whose parameter is Object... args. All the classes implementing it have the job of updating the configuration in the third party as required. Is this the right approach design-wise?
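
For reference, roughly what that sounds like in code (the names here are invented), together with a more strongly typed variant that keeps each implementation's arguments explicit:

@FunctionalInterface
interface ConfigUpdater {
    void update(Object... args);
}

// A variant sometimes preferred: one small request type per trigger, so each
// implementation's arguments stay explicit instead of an Object... args bag.
interface TypedConfigUpdater<R> {
    void update(R request);
}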

How to get the advantage of types when having an abstract DAO class

So imagine that we have a DAO interface with a structure like this:

interface UserDao<T> {
  findOne(query: any, projection: any): Promise<T>;
  findAll(query: any, projection: any): Promise<T[]>;
  create(item: any): Promise<T>;
  update(item: any): Promise<T>;
  delete(query: any): Promise<boolean>;
}

and we have a mongoDB implementation like this:

class MongoUserDao implements UserDao<User> {
  findOne(query: FilterQuery<User>, projection: ProjectionQuery<User>): Promise<User>;
  findAll(query: FilterQuery<User>, projection: ProjectionQuery<User>): Promise<User[]>;
  create(item: User): Promise<User>;
  update(item: User): Promise<User>;
  delete(query: FilterQuery<User>): Promise<boolean>;
}

and method (to be platform agnostic we are using the interface)

async function makeSomeChangesToUser(userDao: UserDao<User>) {}

Problem: As we can see, the interface that we are using in the function limits IntelliSense to type-hinting the parameters as any and makes the MongoUserDao types useless. If we swap it to userDao: MongoUserDao we are not platform-agnostic anymore, and UserDao is useless. How can I make it platform-agnostic and still get the advantage of the types?
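
One possible direction, sketched below (User, FilterQuery and ProjectionQuery are the types from the question; the extra generics are the assumption being made here): parameterize the query and projection types as well, with defaults so platform-agnostic callers stay simple.

// Stand-in for the question's User entity, just to keep the sketch self-contained.
interface User { id: string; name: string; }

interface UserDao<T, Q = unknown, P = unknown> {
  findOne(query: Q, projection: P): Promise<T>;
  findAll(query: Q, projection: P): Promise<T[]>;
  create(item: T): Promise<T>;
  update(item: T): Promise<T>;
  delete(query: Q): Promise<boolean>;
}

// The Mongo implementation pins all three generics:
//   class MongoUserDao implements UserDao<User, FilterQuery<User>, ProjectionQuery<User>> { ... }
// while platform-agnostic code keeps the defaults and stays typed on the entity:
async function makeSomeChangesToUser(userDao: UserDao<User>) {
  const user = await userDao.findOne({ name: 'Ada' }, {});
  return userDao.update({ ...user, name: 'Ada Lovelace' });
}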

How to apply separation in one service for multiple tasks?

I have a service method that does so many things.

public Result DoSomething(){
    var queryResult = service.GetResult();
    
    SaveResultToRedis(queryResult);
    logger.Log($"this data saved in redis successfully {queryResult.Id}");
    
    AddSomethingToKafka(queryResult);
    logger.Log($"this data saved in kafka successfully {queryResult.Id}");
    
    logger.Log($"this data response is success {queryResult.Id}");
}

In this situation,

  • if Redis or Kafka fails, the request response will fail.
  • if the logger service fails, the request response will fail.
  • if I put all the logic in try/catch blocks, the code will look really bad.

Which approach applies in this situation? Are there any design pattern approaches, or something else?
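
For illustration, a minimal sketch of one option (the TryRun helper is invented here): isolate the non-critical side effects so a Redis, Kafka or logging failure cannot fail the main request. Another common answer is to publish an event/outbox message and let the Redis and Kafka steps happen asynchronously.

public Result DoSomething()
{
    var queryResult = service.GetResult();

    TryRun(() => SaveResultToRedis(queryResult), "redis");
    TryRun(() => AddSomethingToKafka(queryResult), "kafka");

    return queryResult;
}

private void TryRun(Action sideEffect, string stepName)
{
    try
    {
        sideEffect();
        logger.Log($"{stepName} step succeeded");
    }
    catch (Exception ex)
    {
        // Swallow and log: the main response does not depend on this step.
        logger.Log($"{stepName} step failed: {ex.Message}");
    }
}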

How to keep the size of the visitor classes manageable?

I've got an interesting issue on a recent project that I can't wrap my head around. The code below is Java, but the question itself is pretty language-agnostic.

Our application's goal is to store user profiles consisting of key-value pairs. The number of key-value pairs is not predefined. The application receives incremental updates at the field level, similar to:

// This will increment property `numberOfLogins` of user `123` by 2.
{
    "userId": "123",
    "operation": "increment",
    "property": "numberOfLogins",
    "incrementByValue": 2
}

We are also experimenting with multiple storage backends, so we decided to use the Visitor pattern:

public interface UserProfileUpdate {
    void accept(UserProfileUpdateVisitor visitor);
}

public class IncrementPropertiesUserProfileUpdate implements UserProfileUpdate {
   
    ... // Data fields according to the particular update format
 
    @Override
    public void accept(UserProfileUpdateVisitor visitor) {
        visitor.visit(this);
    }
}

The visitor itself:

public interface UserProfileUpdateVisitor {
    void visit(IncrementPropertiesUserProfileUpdate userProfileUpdate);

    void visit(ReplacePropertiesUserProfileUpdate userProfileUpdate);

    void visit(CollectPropertiesUserProfileUpdate collectPropertiesUserProfileUpdate);
}

So far so good; to hide all the details of how updates are processed by the different storage backends, we can implement visitors:

public class MongoDBUserProfileUpdateVisitor implements UserProfileUpdateVisitor {
    @Override
    public void visit(IncrementPropertiesUserProfileUpdate update) {
       // ...
    }

    @Override
    public void visit(ReplacePropertiesUserProfileUpdate update) {
        // ...
    }

    @Override
    public void visit(CollectPropertiesUserProfileUpdate update) {
        // ...
    }
}

The issue is that the visitor classes quickly became quite huge and hard to test. To overcome this we have to extract each visit() method into its own class, which leads to:

public class MongoDBUserProfileUpdateVisitor implements UserProfileUpdateVisitor {
    @Override
    public void visit(IncrementPropertiesUserProfileUpdate update) {
        incrementPropertiesUserProfileUpdateMongoDBProcessor.process(update);
    }

    @Override
    public void visit(ReplacePropertiesUserProfileUpdate update) {
        replacePropertiesUserProfileUpdateMongoDBProcessor.process(update);
    }

    @Override
    public void visit(CollectPropertiesUserProfileUpdate update) {
        collectPropertiesUserProfileUpdateMongoDBProcessor.process(update);
    }
}

So my questions:

  • Is there any better way to optimize structure of the visitor in such case?
  • Is visitor pattern a good choice here in the first place?

Thanks in advance!

Repository and UnitOfWork patterns using transactions

First of all, I work with C# (ASP.NET Core 3.1 in this case), but the question is more generally about the design pattern.

So I ran into an issue using the repository pattern. I needed transactions for my repositories and found the UnitOfWork pattern, which I was not aware of. I implemented it on the basis that EF Core's SaveChanges acts like a transaction by itself when there is only one call to SaveChanges (which should be the case with a UnitOfWork if possible), so I didn't need a "proper transaction". All was working fine until I needed to do something basic: I create an entity in the DB and then create an email in which I need the ID of this entity.

The process is the following :

  • Create the entity
  • Create the email content
  • Save the entity
  • Send the email

The issue is that the ID of the entity is generated on insert into the DB, so if I call SaveChanges after creating the email content, the ID is not set. But if the creation of the email content fails, I don't want the insert to be done in the DB. So I thought "hmm, I really need a transaction after all" and did the following in my UnitOfWork:

public async Task OpenTransaction()
{
    if (_dbTransaction == null)
    {
        _dbTransaction = await _dbContext.Database.BeginTransactionAsync().ConfigureAwait(false);
    }
}

/// <inheritdoc />
public async Task CommitTransaction()
{
    if (_dbTransaction != null)
    {
        try
        {
            await _dbTransaction.CommitAsync().ConfigureAwait(false);
        }
        catch (Exception)
        {
            await _dbTransaction.RollbackAsync().ConfigureAwait(false);
            throw;
        }
        finally
        {
            _dbTransaction.Dispose();
            _dbTransaction = null;
        }
    }
}

And my process becomes the following:

  • Open a transaction
  • Create the entity
  • Create the email content
  • Commit the transaction

And it's working, but I'm afraid I've introduced some kind of anti-pattern.

Has anyone had a similar issue? What do you think of the solution I chose? Is there a better way or another pattern to solve this?

Thanks a lot

JavaScript Design Patterns - Method which behaves as a manager and calls different methods

Introduction

I have different features in my app, like:

  1. Reset password (forgot password)
  2. Update user password
  3. Sign up
  4. Become premium

For each feature, on success, I want to send a custom HTML email to the user.

Generation of the emails

As the main HTML structure is the same (only the content (texts) changes), I have decided to implement my helper method templateGenerator():

const { HEAD, BODY } = require("../utils");

module.exports = (main) => `
  <!DOCTYPE html>
  <html lang="en">
    ${HEAD}
    ${BODY(main)}
  </html>
`;

And, in order to include the specific data to the generated template, I am using different methods for each feature:

const signUpWelcomeEmailTemplate = (name) => 
  templateGenerator(`<p>Welcome, ${name}</p>`);

const premiumEmailTemplate = (name) =>
  templateGenerator(`<h1>${name}, you are now premium!</h1>`); 

// The html is more complex, I have reduced it for simplicity

Sending emails

In order to send the emails to the users, I have the following method, which adds an email record to my database. Then, when the record is added, a custom extension connected to the MailGun service sends the mail via SMTP.

This is the implementation of my method:

async function sendEmail(
  to,
  subject,
  text = undefined,
  html = undefined
) {
  const mailsRef = db.collection("mails");

  const mail = {
    to,
    message: {
      subject,
      ...text && { text },
      ...html && { html },
    },
  };

  await mailsRef.add(mail);

  functions.logger.log("Queued email for delivery!");
};

Problem

Now, in order to use this method, in each feature, I have to do the following:

async function goPremium(user) {
   try {
     await purchasePremium(user.uid);

     const html = premiumEmailTemplate(user.name);

     sendEmail(user.email, "Premium purchase success!", undefined, html);
   } catch(err) {
     ...
   }
}

I am looking for a pattern which generalizes this call, I mean, some kind of email manager.

I have thought about two different ways to refactor this code:

#1 (I don't like this way, as the method might be super long, I mean, imagine 100 features...)

   function emailManager(user, type) {
     switch (type) {
       case "welcome":
         sendEmail(user.email, "Welcome!", undefined, welcomeEmailTemplate(user.name));
         break;

       case "premium":
         sendEmail(user.email, "Premium purchase success!", undefined, premiumEmailTemplate(user.name));
         break;

       default:
         break;
     }
   }
#2: Just create different methods and export them from a 'central' module.
...

const sendPremiumEmail = (user) => {
  const title = "Premium purchase success!";
  const html = premiumEmailTemplate(user.name);
  
  return sendEmail(user.email, title, undefined, html);
};

...

module.exports = {
  sendWelcomeEmail,
  sendPremiumEmail,
  ...
}

But... maybe there is another way, a pattern which exactly solves this situation. Any ideas?
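One further option, sketched below (the template helpers and subjects are the ones from above; the registry itself is invented): keep a declarative map of email types and one generic dispatch function, so adding a feature means adding one entry rather than a new branch or a new exported function.

const EMAIL_TYPES = {
  welcome: {
    subject: "Welcome!",
    template: (user) => welcomeEmailTemplate(user.name),
  },
  premium: {
    subject: "Premium purchase success!",
    template: (user) => premiumEmailTemplate(user.name),
  },
};

function sendTypedEmail(user, type) {
  const config = EMAIL_TYPES[type];
  if (!config) throw new Error(`Unknown email type: ${type}`);
  return sendEmail(user.email, config.subject, undefined, config.template(user));
}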

Wednesday, April 27, 2022

How to architect an embedded system with multiple input and output capabilities, some based on hardware, some on software settings

I have an ESP8266 project programmed in the Arduino framework that gathers data from the network and then shows it on a display. The device can be built with a few different display hardware types (e-ink, LED, OLED). These are set at compile time with #defines. However, there are also a few different types of data and data transport mechanisms that can be used. Some require hardware (LoRa TX/RX) and are enabled at compile time, but some can be changed at runtime based on user settings (e.g. HTTP or MQTT).

I'm already using a factory design pattern to instantiate the Data transport object at runtime but still use compile time build flags to select which display hardware to use. I have a Display class, a Datasource class and a Config class. This has worked well but is now reaching its limit as I try to add Cellular functionality.

I wonder if there is a good design pattern / architecture design that will facilitate this kind of flexibility without having to keep adding more and more intrusive #ifdef statements all over my code.

Attached is a little mind map of the basic layout of the possibilities of this device. (mind map image omitted)

Are there any specialized databases for event-sourcing?

I am very interested in the event-sourcing pattern and I would like to know if there are any databases that can help to use this pattern.

Tuesday, April 26, 2022

Business logic inside of value object

I think a value object should not have business logic.

That may confuse other programmers.

For example:

public class PersonVO {

    private String name;
    private int age;

    public void somethingBusinessLogic() {
        // Do very complicated logic -> Using Reflection, Conversion
    }
    
}

If I use this VO, I have to look at the VO's logic to see how it works.

Many programmers put their business logic inside of VOs.

I wonder what the best practice is.

Game Development Design Patterns

I'm fairly good at programming and can make some pretty advanced stuff, but I'm bad at design patterns and making things very extendable, and I always get overwhelmed on projects to the point where I just quit because my code is so messy.

So let's say there's an item in your game: you can grab it, drop it, and throw it. I also want there to be tools, let's say a grapple hook, so you can grab an item from farther away. And the thing that is hurting my brain the most: let's say you can get a power-up to throw an item farther.

This is what is going through my mind. You can grab, drop, and throw an item; that's simple and easy. Then when you have a tool equipped, like a grappler, you can grab an item from farther away by grabbing it with a larger grab distance. But then when you throw an item, I would just check whether I have a power-up, and if I do then I can throw it farther. But I want this to be easily extendable, so what would happen if I had to check through other power-ups too? Well, then the code starts getting messier and messier.

What would be the cleanest/most-extendable way to design this?

Monday, April 25, 2022

Can you let your application fan-out instead of SNS fan-out?

I currently have an application that sends messages to an SNS topic, and there are three SQS queues subscribed to the topic. I am trying to eliminate the usage of SNS from my architecture because of cost. Is it possible for my application itself to act like SNS and fan out messages to the SQS queues without using SNS? If there are any drawbacks, what are they?

What are the main reasons for using an interface in the decorator design pattern?

interface Dress {
  public void assemble();
}

class BasicDress implements Dress {
  @Override public void assemble() {
    System.out.println("Basic Dress Features");
  }
}

class DressDecorator implements Dress {
  protected Dress dress;

  public DressDecorator(Dress c) {
    this.dress = c;
  }

  @Override public void assemble() {
    this.dress.assemble();
  }
}

class SportyDress extends DressDecorator {
  public SportyDress(Dress c) {
    super(c);
  }
  
  @Override public void assemble() {
    super.assemble();
    System.out.println("Adding Sporty Dress Features");
  }
}

class FancyDress extends DressDecorator {
  public FancyDress(Dress c) {
    super(c);
  }
  
  @Override
  public void assemble() {
    super.assemble();
    System.out.println("Adding Fancy Dress Features");
  }
}

public class DecoratorPatternTest {
  public static void main(String[] args) {
    Dress sportyDress = new SportyDress(new BasicDress());
    sportyDress.assemble();
    System.out.println();
    
    Dress fancyDress = new FancyDress(new BasicDress());
    fancyDress.assemble();
    System.out.println();  
  }
}

Why do we need an interface in the decorator design pattern?

How to design an interface to call functions with a parameter of varying type polymorphically?

Say we want to be able to call functions run of ImplementationA and ImplementationB polymorphically (dynamically at runtime) in this example:

struct Input {};

struct MoreInputA {};
struct ImplementationA {
    void run(Input input, MoreInputA more_input);
};

struct MoreInputB {};
struct ImplementationB {
    void run(Input input, MoreInputB more_input);
};

Both take some Input in the same format, but some MoreInput in different formats. ImplementationA and ImplementationB can only do their job on their specific MoreInputA and MoreInputB respectively, i.e. those input formats are fixed. But say MoreInputA/B can be converted from a general MoreInput. Then a simple polymorphic version could look like this:

struct Input {};
struct MoreInput {};

struct ImplementationBase {
    virtual void run(Input input, MoreInput more_input) = 0;
};


struct MoreInputA {};
MoreInputA convertToA(MoreInput);

struct ImplementationA : public ImplementationBase {
    void run(Input input, MoreInputA more_input);
    void run(Input input, MoreInput more_input) override {
        run(input, convertToA(more_input));
    }
};

// same for B

However, now the more_input has to be converted in every call to run. A lot of unnecessary conversions are forced on a user of the polymorphic interface if they want to call run repeatedly with varying input but always the same more_input. To avoid this, one could store the converted MoreInput inside of the objects:

struct Input {};
struct MoreInput {};

struct ImplementationBase {
    virtual void setMoreInput(MoreInput) = 0;
    virtual void run(Input input) = 0;
};


struct MoreInputA {};
MoreInputA convertToA(MoreInput);

struct ImplementationA : public ImplementationBase {
    void run(Input input, MoreInputA more_input);

    MoreInputA more_input_a;
    void setMoreInput(MoreInput more_input) override {
        more_input_a = convertToA(more_input);
    }

    void run(Input input) override {
        run(input, more_input_a);
    }
};

// same for B

Now it is possible to do the conversion only when the user actually has new MoreInput. But on the other hand, the interface is arguably more difficult to use now. MoreInput is not a simple input parameter of the function anymore, but has become some sort of hidden state of the objects, which the user has to be aware of.

Is there a better solution that allows avoiding conversion when possible but also keeps the interface simple?
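
One further idea sometimes suggested (a sketch, not from the original design): make the conversion an explicit step that returns a ready-to-run callable, so the converted input is neither hidden object state nor re-converted on every call.

#include <functional>

struct Input {};
struct MoreInput {};

struct ImplementationBase {
    virtual ~ImplementationBase() = default;
    // Convert once, get back something you can call many times with varying Input.
    virtual std::function<void(Input)> prepare(MoreInput more_input) = 0;
};

struct MoreInputA {};
MoreInputA convertToA(MoreInput) { return {}; }

struct ImplementationA : ImplementationBase {
    void run(Input, MoreInputA) { /* the actual work */ }

    std::function<void(Input)> prepare(MoreInput more_input) override {
        // The returned callable captures the already-converted value.
        return [this, converted = convertToA(more_input)](Input input) {
            run(input, converted);
        };
    }
};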

Sunday, April 24, 2022

What is the best way to use sx prop in MUI v5?

I started using MUI v5 with makeStyles in my previous project. After deploying, I faced a huge delay in loading the page's CSS. So I started searching and found out that makeStyles is deprecated in MUI v5.

MUI suggests using the sx prop instead. That's fine. But the problem here is that I don't want to write my JSX and CSS/JSS code together. For example:

This is what MUI says:

// App.js
function App() {
  return (
    <Grid sx={{ bgcolor: 'yellow', mx: 5, pt: 2, border: 1 }}>
      This is a test!
    </Grid>
  );
}

export default App;

Below is somehow what I expect:

// style.js
export default {
  myGrid: {
    bgcolor: 'yellow',
    mx: 5,
    pt: 2,
    border: 1,
  },
};
// App.js
import style from "./style";

function App() {
  return <Grid sx={style.myGrid}>This is a test!</Grid>;
}

export default App;

I want to know what the best pattern is to keep the JSS and JSX files separate while using sx. Is it possible to get VSCode suggestions while typing sx props in another file?
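
One workable compromise, sketched below (TypeScript; the exact import paths for SxProps and Theme can vary between MUI versions): type the external style object as SxProps<Theme>, which is what restores editor suggestions even though the styles live in a separate file.

// style.ts
import type { SxProps } from "@mui/system";
import type { Theme } from "@mui/material/styles";

const style: Record<string, SxProps<Theme>> = {
  myGrid: {
    bgcolor: "yellow",
    mx: 5,
    pt: 2,
    border: 1,
  },
};

export default style;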

Saturday, April 23, 2022

Program using nested for loops to print string pattern is not working. What code do I use for the loop to make it print the required pattern?

This is the pattern I'm trying to print: (image of the required pattern omitted)

This is the code I have written:

                    for(int c = s.length; c >= 1; c--)
                    {
                        for (int d = c; d >= 0; d--)
                        {
                            char y = s.charAt(d);
                            System.out.print(y);
                        }
                        System.out.println();
                    }

Sadly, the program isn't compiling. What do I change in this code to make it print the required pattern?

A virtual graffiti wall that our customers can sign & doodle on

Firstly, please accept my apologies in advance if I’ve posted this in the incorrect place. I’m a total newbie to this community and I’m hoping to see if somebody could point me in the direction of how to do something to make this idea of mine a real thing!

Say I wanted to add a virtual whiteboard that my customers can sign and doodle on. Anybody can add to the whiteboard but can't edit / scribble over somebody else's note or doodle. As more people add their piece to this virtual whiteboard I want it to expand, so the number of people who can add a piece to it would be unlimited.

Any ideas on where I might even begin researching this?

Thanks for your help

Optimal design patterns for reading and restructuring datasets with different formats?

Problems:

  1. (I want to solve this) Homogenize image datasets with different formats: HDF5, folder images (with different structures), etc.

  2. (Just to give you context) Then, the datasets are concatenated, preprocessed by a client code and stored in a HDF5 file with a defined/fixed structure.

My solution to 1:

Use the template pattern as the following pseudo-UML shows:

(pseudo-UML diagram omitted)

Noticed drawbacks of this solution to 1:

  1. Client code needs to change each time a new dataset comes into play because it doesn't know which ConcreteStructurizer to use for a given dataset. I mean, the client does something like this:

     if dataset_0: use ConcreteStructurizerFolder
         ConcreteStructurizerFolder(cfg_dataset_0).reorganize()
     ...
     if dataset_n: use ConcreteStructurizerHDF5
         ConcreteStructurizerHDF5(cfg_dataset_n).reorganize()

Could you propose a better/optimal approach/design pattern?

PS: I am learning software design (physics background); I'd be grateful if you could provide a pedagogical/well-explained answer, thanks.
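
One common refinement, sketched below (the ConcreteStructurizer classes here are simplified stand-ins for the ones in the diagram): register the concrete structurizers in a small factory keyed by the dataset's format, so the client never branches on which dataset it is handling.

# Stub stand-ins for the classes from the diagram, just so the sketch runs.
class ConcreteStructurizerFolder:
    def __init__(self, cfg):
        self.cfg = cfg
    def reorganize(self):
        print("reorganizing folder dataset", self.cfg["name"])

class ConcreteStructurizerHDF5:
    def __init__(self, cfg):
        self.cfg = cfg
    def reorganize(self):
        print("reorganizing HDF5 dataset", self.cfg["name"])

# The actual idea: a registry keyed by dataset format. Adding a new dataset format
# means adding one entry here, and the client code never changes.
STRUCTURIZERS = {
    "folder": ConcreteStructurizerFolder,
    "hdf5": ConcreteStructurizerHDF5,
}

def reorganize(cfg):
    return STRUCTURIZERS[cfg["format"]](cfg).reorganize()

reorganize({"format": "hdf5", "name": "dataset_n"})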

How to use an enumeration and pattern for phone in xsd?

The problem statement requests the following: the phone should have a type attribute with 3 enumerated values of home, cell, and work, AND the phone should also be restricted to a phone format of (###) ###-####. How do you combine the attribute with enumerated values AND apply a restricted pattern? XML example:

    <donor level="founder">
      <name>David Brennan</name>
      <address>5133 Oak Street
           Windermere, FL  34786</address>
      <phone type="home">(407) 555-8981</phone>
      <phone type="cell">(407) 555-8189</phone>
      <email>dbrennan@delisp.net</email>
      <donation>50000.00</donation>
      <method>Phone</method>
      <effectiveDate>1982-09-01</effectiveDate>
    </donor>

XSD code I have so far for the phone:

    <xs:attribute name="type" type="pType" />

       <xs:simpleType name="phoneType">
          <xs:restriction base="xs:string">
               <xs:pattern value="\(\d{3}\)\s\d{3}-\d{4}" />
          </xs:restriction>
       </xs:simpleType>

   <xs:element name="phone">
     <xs:complexType>  
        <xs:simpleContent>
            <xs:extension base ="xs:string">
                <xs:attribute ref="type" use="required"/>
            </xs:extension>
        </xs:simpleContent>
    </xs:complexType>
</xs:element>

<xs:simpleType name="pType">
    <xs:restriction base="xs:string">
        <xs:enumeration value="home" />
        <xs:enumeration value="cell" />
        <xs:enumeration value="work" />
    </xs:restriction>
</xs:simpleType>

<xs:element name="donor">
    <xs:complexType>
        <xs:sequence>
            <xs:element ref="name"/>
            <xs:element ref="address"/>
            <xs:element ref="phone" minOccurs="1" maxOccurs="unbounded"/>
            <xs:element ref="email" minOccurs="0" />
            <xs:element ref="donation" />
            <xs:element ref="method" />
            <xs:element ref="effectiveDate" />
        </xs:sequence>
        <xs:attribute ref="level" />
    </xs:complexType>
</xs:element>

</xs:schema>
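
For what it's worth, a sketch of one way to combine the two (not run through a validator here): point the simpleContent extension at the pattern-restricted type instead of xs:string, and keep the enumerated attribute on the extension.

<!-- The element content is restricted by phoneType's pattern,
     while the attribute keeps the pType enumeration. -->
<xs:element name="phone">
  <xs:complexType>
    <xs:simpleContent>
      <xs:extension base="phoneType">
        <xs:attribute ref="type" use="required"/>
      </xs:extension>
    </xs:simpleContent>
  </xs:complexType>
</xs:element>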

Friday, April 22, 2022

C++ Reversing the operation of type erasure

The example code which I write as part of this question probably will seem contrived and without much useful purpose, but that is because it is a minimal example rather than a convoluted code which doesn't convey the question succinctly.

I want to reverse the operation of type erasure. I assume there isn't a design pattern for doing this - if there is I either don't know of it or haven't realized how it can be used for this purpose.

Consider the following type-erasure.

class Base
{

}

template<typename T>
class Typeless : Base
{
    T data;
}

The purpose of this is to store base class pointers in a container.

std::container<Base&> container;

The type has been erased.

However, elsewhere in my code I want to provide static type enforcement.

template<typename U>
class Reference
{
    U &external_ref_to_object;

    Reference(U &ref)
        : external_ref_to_object(ref)
    {}
}

Base *p_tmp = new Typeless<int>;
container.push_back(p_tmp);
Reference<int> r(p_tmp);

The point here is that although container can contain objects of any type due to the type-erasure pattern which has been utilized, the Reference class should enforce the correct type to be used.

If this example is confusing, more context might be helpful. I am writing something which is not too dissimilar from a database application. container is basically a collection of all the data to be managed, regardless of what type the data is. (It avoids having a container<T> for each unique T as shown below.)

// avoid this:
std::container<int> all_integer;
std::container<float> all_floating_point;
std::container<std::string> all_text;
std::container<void*> anything_else;

Hence the type erasure.

The purpose of "getting the type back again" is to enforce all database columns to contain objects of the same type.

DBColumn<int> my_column_contains_int_type_data;
my_column_contains_int_type_data.insert(42);

PS: Try not to be distracted by the fact this code clearly does not compile. It is intended to be a sketch to demonstrate the question.

As a final comment, it occurred to me that this is vaguely similar to a factory pattern. (Although not quite the same.) We already have the object. We don't need to clone it, only store a reference to it. We don't need to load anything from user input, network, disk or other external source. So it isn't a creational factory and it isn't a clone factory either.

Factories can use some form of centrally managed and generated unique id's to indicate what type an object is. For example, one might save and load from disk data with an identifier in the header which indicates what type of object is represented by the data on disk and therefore how an object should be created dynamically.

In my case, it would be possible to have such an identifier for each derived class (each unique T in Typeless<T>), however this doesn't seem like a particularly good solution, as I fear I may end up writing the following block of code.

class DBColumn<T>
if(identifier == "INT")
{
    if(typeof(T) is int)
    {
        // good
    }
    else
    {
        throw "BAD"; // bad
    }
}
else if(...) // repeat for each of int, float, double, std::string, etc

Hopefully the question is fairly clear?
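
For reference, a compilable sketch of the usual way to "reverse" the erasure (this swaps in dynamic_cast rather than the identifier scheme discussed above): make Base polymorphic and cast back to the concrete Typeless<T>, getting a null result instead of a thrown "BAD" when the types don't match.

#include <iostream>
#include <memory>
#include <vector>

class Base {
public:
    virtual ~Base() = default;  // polymorphic, so dynamic_cast works
};

template <typename T>
class Typeless : public Base {
public:
    T data{};
};

template <typename T>
T* try_get(Base& b) {
    // Returns nullptr when the stored object is not a Typeless<T>.
    auto* typed = dynamic_cast<Typeless<T>*>(&b);
    return typed ? &typed->data : nullptr;
}

int main() {
    std::vector<std::unique_ptr<Base>> container;
    container.push_back(std::make_unique<Typeless<int>>());

    if (int* value = try_get<int>(*container[0])) {
        *value = 42;
        std::cout << *value << "\n";
    }
}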

Keep/ Change Colnames via a specific pattern

Could you help me with this little problem?

My colnames look like:

...
    [121] "14.11.1838"      "21.11.1838"      "30.11.1838"      "07.12.1838.1"      "14.12.1838"     
    [126] "21.12.1838"      "NA"              "31.12.1838"      "01.01.1839"      "02.01.1839"     
    [131] "03.01.1839"      "04.01.1839"      "NA.1"      "07.01.1839.1"      "08.01.1839"     
....

In total I have more than 500 names. Now I want to delete the ".1" suffixes when they are at the end of a name (date). And I want to delete all names (the whole column) containing "NA".

Thanks
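
For illustration, a base-R sketch (assuming the names live on a data frame called df):

# Drop every column whose name contains "NA", then strip a trailing ".1" from the rest.
df <- df[, !grepl("NA", colnames(df))]
colnames(df) <- sub("\\.1$", "", colnames(df))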

Thursday, April 21, 2022

Benefit of using opcodes VS human-readable functions/variables

I'm currently dusting off an old adventure game engine I last worked on back in 2019. The engine itself doesn't use an interpreter (there's no special script compiler producing bytecode) but instead compiles into Go code and then into native binaries from there. Technically there would therefore be no need for opcodes since, unlike many earlier adventure engines, there's no special interpreter/virtual machine.

I still question the wisdom of those earlier designs (LucasArts' SCUMM and Infocom's Z-Machine come to mind) that all implemented opcodes for common engine behavior.

My engine uses two parts: the engine itself and a small over-the-wire protocol that enables developers to implement GUIs in whatever language they want, as long as it can connect via TCP. Currently, that protocol just sends the player input (it's a text adventure, although for later implementations I want to add mouse support) right back to the "backend" in clear text. However, there are also other actions besides { "input" : "" }.

Aside from making things harder to disassemble (but also harder to debug) and enabling the development of custom script languages that differ from my reference implementation, would there be any benefit to replacing things like "input", "launch" or "quit" with opcodes like 0xBA, 0xBB, etc. (perhaps also obfuscating the input values)?

Inside the engine itself I use public structs like this to define items, locations and characters.

Would I gain anything by replacing this human readable format with opcodes or is that very idea archaic?

type Item struct {
    Article     string    `json:"article"`
    Name        string    `json:"name"`
    Description string    `json:"description"`
    Verbs       ItemVerbs `json:"-"`
    Synonyms    []string  `json:"synonyms"`
    Moveable    bool      `json:"moveable"`
    Carryable   bool      `json:"carryable"`
    Attributes  []string  `json:"attributes"`
    Fixture     bool      `json:"fixture,omitempty"`
}

How to design a RAII wrapper to be used as a base class?

I had the idea to use std::unique_ptr as a base class for another class that manages some kind of external resource.

Let's assume my program needs to manage resources provided by a C-like library via the following functions: R* lib_create_resource(A* allocator); and void lib_destroy_resource(R* resource, A* allocator);. I would like to create a class that manages this resource, so I thought that using std::unique_ptr as a base class would be a good idea. Here's a pseudo-implementation:

struct Resource: public std::unique_ptr<R, std::function<void(R*)>> {
    Resource(): 
        unique_ptr{
            lib_create_resource(&my_allocator),
            [this](R* r) { lib_destroy_resource(r, &my_allocator); }
        }
    { }

    /* Other functions that manipulate the resource */

private:
    A my_allocator;
};
  • Why am I using std::unique_ptr as a base class rather than as a non-static member? Because I would like to expose std::unique_ptr's methods, such as get(), reset(), operator bool(), etc., and this way I don't need to manually re-define each of them, especially when the library provides many kinds of different resources instead of just one, and I want to write a separate wrapper for each of them.
  • Why not use std::unique_ptr<R, std::function<void(R*)>> on its own then, without the Resource class? Because I would like to extend std::unique_ptr and provide additional methods, specific to this type of resource.

Now, the above pseudo-implementation has two major problems, which are the main point of my question.

  1. Since base classes are initialised before non-static members, my_allocator is passed to lib_create_resource() uninitialised. This isn't hard to fix, as I can just default-initialise the unique_ptr base and re-assign it in the constructor's body, but I think it's easy to forget about this.
  2. Similarly, during destruction, the non-static members are destroyed before the base classes. First, my_allocator will be destroyed, then ~unique_ptr() will be called, which in turn will call lib_destroy_resource() with my_allocator. But at that point, my_allocator no longer exists.

I haven't been able to come up with any solution for issue number 2. Is there a way to re-design this class so that lib_destroy_resource() doesn't access my_allocator outside its lifetime? Of course, one solution would be to manually call lib_destroy_resource() or std::unique_ptr::reset() at the appropriate times, but automatic resource management with RAII is considered good practice, especially when exceptions are involved. So is there a better way to accomplish this, possibly by implementing my own std::unique_ptr-like class?
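
One direction that is sometimes suggested for issue number 2, sketched below with stub types standing in for the library: let the deleter own (a handle to) the allocator, so the allocator's lifetime is tied to the unique_ptr itself and is still alive when the deleter runs, regardless of member destruction order.

#include <memory>

// Stand-ins for the C-like library from the question (illustrative stubs only).
struct A {};
struct R {};
R* lib_create_resource(A*) { return new R{}; }
void lib_destroy_resource(R* r, A*) { delete r; }

struct ResourceDeleter {
    std::shared_ptr<A> allocator;  // kept alive as long as the deleter exists
    void operator()(R* r) const { lib_destroy_resource(r, allocator.get()); }
};

class Resource : public std::unique_ptr<R, ResourceDeleter> {
public:
    Resource() : Resource(std::make_shared<A>()) {}

    /* Other functions that manipulate the resource */

private:
    explicit Resource(std::shared_ptr<A> alloc)
        : std::unique_ptr<R, ResourceDeleter>(lib_create_resource(alloc.get()),
                                              ResourceDeleter{alloc}) {}
};

int main() {
    Resource res;  // allocator is created first and handed to both the library and the deleter
}                  // on destruction the deleter still holds the allocator, so the call is safe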

Wednesday, April 20, 2022

Google Pub/Sub client in 3rd party environment

I am building a backend service that would communicate with clients via pub/sub. However, since the clients would run in a 3rd-party environment, I am not sure how to secure it. In a controlled environment I would just create a service account, but since this is more of a SaaS environment I am not sure how many clients there will be (GCP has a limit of 100 service accounts). What is the best way to handle it?

thanks

Design Pattern for successive method calls with previous method's output as input

Is there an appropriate design pattern for when you have to plug the output of one function in as the input to the next? See the example below.

public static void main(String[] args) {
    A a = computeA();
    B b = computeB(a);
    C c = computeC(b);
    D d = computeD(c);
    doSomeWork(d);
}
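
For illustration (A, B, C, D and the compute* methods are the ones from the example above; this is a sketch rather than a named pattern): java.util.function.Function lets the chain be written as one composed pipeline instead of a ladder of local variables.

import java.util.function.Function;

Function<A, D> pipeline = ((Function<A, B>) a -> computeB(a))
        .andThen(b -> computeC(b))
        .andThen(c -> computeD(c));

doSomeWork(pipeline.apply(computeA()));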

Python design pattern to validate every user-object interaction

I've been working on a python module that implements a base-class for a text-based board game. The class is meant to be used as a "game engine", to be imported by other python programs that define custom user-interfaces to the game.

Each instance of the class represents a game match, with which one can communicate by calling the appropriate methods, to register players, enter user actions, change game settings, check the state of the game, etc.

The game logic isn't itself troublesome, but in my efforts to make the class robust I find my code is getting extremely cluttered. The problem is making sure that every interaction with the game is valid, both "structurally":

  • that no player is added to the game twice
  • that only commands by users that have joined the game are executed.
  • that only commands issued in the appropriate game stage are executed, etc.

and within the context of the game:

  • that the user is referencing valid game-objects
  • that the user had the resources available for such a command, etc.

That is, I need to make sure that both the actions of the interface program and those of the users are valid. Naively making these consistency checks within my class, however, is hiding the game logic and making developing and reading the code much more burdensome than it should be.

I've thought of wrapping the game-class in a validator class that would make the appropriate checks before running each game-class method, thus decoupling the validating part, but was afraid the separation might not be so clean-cut and the code might get too fragmented.

When dealing with attributes, Python has great concepts, such as descriptors and the @property decorator. Is there something analogous for methods?

I'm sure that's a fairly standard problem, what is the design pattern to help alleviate it?
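
For methods, the closest analogue is an ordinary decorator. A minimal sketch (the stage field and the rule are invented): the decorator carries the structural check, so the decorated method keeps only the game logic.

import functools

def requires_stage(stage):
    """Reject the call unless the match is in the given stage."""
    def decorator(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            if self.stage != stage:
                raise ValueError(f"{method.__name__} only allowed in stage {stage!r}")
            return method(self, *args, **kwargs)
        return wrapper
    return decorator

class Match:
    def __init__(self):
        self.stage = "setup"
        self.players = []

    @requires_stage("setup")
    def add_player(self, name):
        if name in self.players:
            raise ValueError(f"player {name!r} already joined")
        self.players.append(name)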

Doubly Connected Graph Interface Design

I have a directed graph (DAG) with nodes of several types.
Each type has its own implementation of Ensure Outgoing Link, which adds another node to the list of outgoing nodes.

This graph is doubly connected, so Ensure Outgoing Link also calls Ensure Incoming Link which adds the reverse link so that the graph can be traversed in either direction.

Ensure Incoming Link does not call Ensure Outgoing Link, as that would create an infinite loop. So the problem is trying to stop consumers of the nodes from calling Ensure Incoming Link directly (as that would create an incomplete graph), while still allowing the implementation of the Node to call it.

Solutions I see are to either:

  1. Have the graph data structure itself do all the link management
  2. Or to have the node cast itself from the interface to the actual class.
  3. Or to comment or name the Ensure Incoming Link so no one else uses it.

The only issue with having the graph object do it is that each type of node has its own slightly different version of adding an outgoing node, so it still needs to have a method which could be called externally if I'm not careful.

What would be a good design here?

Tuesday, April 19, 2022

Design pattern question: How to route App, Backend and Webhooks

I'm trying to figure out how to design my workflows.

Webhooks are quite handy. a) So I got the idea to create my customers directly in Stripe. The Rails backend app listens for the customer.created webhook and creates the user with the Stripe-provided payload.

b) Normally I would send the form data to the Rails backend, and from there create the customer at Stripe via their gem/library.

┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│  Frontend   │    │   Backend   │    │  Webhooks   │
│   NextJS    │    │    Rails    │    │   Stripe    │
└─────────────┘    └─────────────┘    └─────────────┘
       │                  ▲                  ▲       
       │      create      │     API call     │       
       └─────user via ────┴─────creates ─────┘       
               form             customer             
                                                     
                      or either                          
                                                     
┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│  Frontend   │    │  Webhooks   │    │   Backend   │
│   NextJS    │    │   Stripe    │    │    Rails    │
└─────────────┘    └─────────────┘    └─────────────┘
       │                  ▲                  ▲       
       │                  │                  │       
       └─────create ──────┴────Then via ─────┘       
            customer           webhook               
           at Stripe            create          

What is the common way? Next->Rails->Stripe?

And why should Next->Stripe->Webhooks->Rails be avoided?

Pattern for multistep initialization

I'm writing a class that gets initialized in multiple steps. I don't like the resulting code, and I'm looking for a pattern that would make it cleaner.

Here is an example to illustrate what I mean by initialization in multiple steps.

Say I have a class for an object that will read two streams of data: numbers and operations. Before it consumes at least one element of the operations stream it does not know what to do with the numbers so it drops them. Like this:

class OpNumbers:

    def __init__(self):
        self.op = None

    def process_op(self, op):
        self.op = op

    def process_number(self, number):
        if self.op is not None:
            print(self.op(number))
        else:
            print('dropping', number)

I call this multistep initialization, as the class is initialized twice: one time with the __init__ method when creating the object, and a second time when I observe the first op.

The part I don't like is this constant checking if self.op is None. In this code it doesn't look too bad, but in my code which is more complex, tracking the state of the second initialization gets really annoying.
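
One lightweight option, sketched below (not the only one): swap the number-handling method when the first op arrives, so the "am I initialized yet?" check disappears instead of being repeated in every handler.

class OpNumbers:

    def __init__(self):
        self.process_number = self._drop_number   # start in the "not configured" state

    def process_op(self, op):
        self.op = op
        self.process_number = self._apply_op      # switch state once; no more None checks

    def _drop_number(self, number):
        print('dropping', number)

    def _apply_op(self, number):
        print(self.op(number))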

Monday, April 18, 2022

Intercept specific function invocations using parent/child classes?

I am writing a framework that allows us to intercept certain function invocations so that we can manage them (enforce timeouts and retries, persist return values, etc.). Right now I'm thinking of introducing an abstract class with a method that invokes an abstract method, and forcing users/clients to inherit from the abstract class and do their function invocations inside the abstract method.

Specifically, I am writing an abstract class Fruit that expects clients to:

  1. Extend Fruit and override the eat_fruit method
  2. Invoke the eat method when actually wanting to use the overridden eat_fruit method

The reason I'm thinking of doing this is that we want to record the number of eat_fruit invocations.

class Fruit:
    def eat(self, *args, **kwargs):
        self.eat_fruit(*args, **kwargs)
        
    def eat_fruit(self, *args, **kwargs):
        pass

class Apple(Fruit):
    def eat_fruit(self, *args, **kwargs):
        print("Eat Apple")
        
apple = Apple()
apple.eat()

However, we're running into a problem with extensibility. Let's say the client needs to add an input parameter color to Apple's eat_fruit method; they can add it like this:

class Fruit:
    def eat(self, *args, **kwargs):
        self.eat_fruit(*args, **kwargs)
        
    def eat_fruit(self, *args, **kwargs):
        pass

class Apple(Fruit):
    def eat_fruit(self, color, *args, **kwargs):
        print("Eat "+ color + " Apple")
        
apple = Apple()
apple.eat("Red")

This is workable but relies on them knowing that Fruit's eat method passes *args and **kwargs directly to the eat_fruit method, which seems to be an implementation detail in the eat method that may be subject to change.

I'm wondering if there are better ways to accomplish the same thing. In particular,

  1. Is it possible for them to invoke eat_fruit directly (rather than invoking the parent class's eat method and assuming the arguments will be forwarded to eat_fruit)? I can't think of a way for them to do that while still letting us keep track of eat_fruit invocations.
  2. Is there a better way (rather than using parent/child classes) to intercept function invocations? The final goal of this framework is to intercept specific function invocations made by clients so that we can manage them (enforce timeouts and retries, persist return values, etc.).

Thanks! Jessica
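
One alternative worth considering, sketched below (names invented): have the framework hand clients a decorator instead of a base class, so they call their own method directly and the framework still observes every invocation.

import functools

def managed(method):
    """Wrap a client method so the framework can observe every call."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        wrapper.calls += 1                 # e.g. record metrics, enforce timeouts/retries...
        return method(self, *args, **kwargs)
    wrapper.calls = 0
    return wrapper

class Apple:
    @managed
    def eat_fruit(self, color):
        print(f"Eat {color} Apple")

apple = Apple()
apple.eat_fruit("Red")                     # invoked directly, still counted
print(Apple.eat_fruit.calls)               # -> 1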

UI Design pattern / technology for monitoring when an async process finishes

My frontend kicks off an async process with an http POST.

By whatever means (Kafka, threading, a database insert with another system monitoring the table), some process completes after an unknown amount of time and finishes in some quantifiable way (you can make an HTTP call and determine whether it's done or not).

Are there any design patterns/technologies for notifying the frontend without it having to make repeated requests to some service?

Techniques to combine pagination with authorization?

Suppose we have a messaging app with a table messages. This table has an id as a unique key; source_id, the one who wrote the message; target_id, the one meant to receive the message; and content, the actual content of the message.

A user is authorized to access a message if certain conditions are met. For example, a user is able to access a message if user.id === source_id or user.id === target_id. But it might be more complex like if the target_id represents a group which the user is member of.

There are too many messages to load at once, so we also need a pagination system to fetch 10 messages per call.

What are some good practices for combining the authorization part with the pagination? Should the "authorization layer" be mixed with the "database layer"?
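
A common baseline, sketched below (this covers only the simple ownership rule; group membership would need a join or a precomputed list of allowed targets): push the authorization predicate into the same query that paginates, so page boundaries are computed only over messages the user may see.

-- Fetch one page of messages visible to :user_id.
SELECT id, source_id, target_id, content
FROM messages
WHERE source_id = :user_id OR target_id = :user_id
ORDER BY id DESC
LIMIT 10 OFFSET :offset;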

A generic interface in c++

I am trying to design a broker interface which has a well defined set of APIs - however, the arguments are not well defined. For example,

class broker {
    virtual void add(Type type) = 0;    // Type is to illustrate the undecided argument
    virtual void remove(Type type) = 0;
};

My problem is that the arguments of the virtual functions depend on the actual implementation. Each implementation would configure a specific configuration class Type.

Ideally:

class ConfigABroker : public broker {
    void add(ConfigA configA) { .... }
    void remove(ConfigA configA) { .... }
};

Where the type ConfigA is a simple C++ class with member variables.

I am looking for a C++ design pattern which can overcome this issue. I tried looking into type erasure; however, it seems I end up with the same problem again.
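
One direction that avoids the shared untyped argument, sketched below: make the broker a class template over its config type, so every implementation exposes the same set of APIs but with its own well-typed argument. The trade-off is that there is no longer a single common base class under which mixed brokers can be stored.

template <typename Config>
class Broker {
public:
    virtual ~Broker() = default;
    virtual void add(const Config& config) = 0;
    virtual void remove(const Config& config) = 0;
};

struct ConfigA {
    int id = 0;
};

class ConfigABroker : public Broker<ConfigA> {
public:
    void add(const ConfigA& config) override { /* ... */ }
    void remove(const ConfigA& config) override { /* ... */ }
};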

Sunday, April 17, 2022

Access "User.Identities" and "HttpContext.Session" out of controller class Net Core C#

I am rewriting a .NET application from .NET 4 to Core 6.

Now I am facing the problem of how to create a user identity class which can be accessed by the get methods in the Core 6 controller classes.

AS both "User.Identities" and "HttpContext.Session" depends on controller class and controller is single inheritance(inherits single class).

So I can't move "User.Identities" and "HttpContext.Session" out of the controller class, and the controller class is limited to single inheritance as well.

Is there any way to access the following method in every controller of my application?

Thanks.

   public string getCurrentUser()
    {
        string username = "none";

        if (User.Identities.SingleOrDefault() != null)
        {
            username = User.Identities.SingleOrDefault().Name;
        }

        if (HttpContext.Session.GetString("userid") != null)
        {
            username = HttpContext.Session.GetString("userid");
        }

        return username;
    }
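
For reference, a sketch of one common approach (assuming standard ASP.NET Core MVC controllers with session middleware configured; the class names here are invented): put the helper on a shared base controller, so it is available in every controller without needing multiple inheritance. An extension method on ControllerBase is another common variant.

using System.Linq;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public abstract class AppControllerBase : ControllerBase
{
    protected string GetCurrentUser()
    {
        // Prefer the session value, fall back to the identity name, then to "none".
        var identityName = User.Identities.SingleOrDefault()?.Name;
        return HttpContext.Session.GetString("userid") ?? identityName ?? "none";
    }
}

public class OrdersController : AppControllerBase
{
    [HttpGet]
    public string WhoAmI() => GetCurrentUser();
}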

Saturday, April 16, 2022

Are there best practices for writing a CMS system for different kinds of users?

Currently I am writing a content management system where users can be of 3 different kinds (Admin, Merchant, Merchant-created users). I am curious whether there are some best practices for how to lay out the project, for example writing one project for Admins and a second one for Merchants, or writing one project and splitting the logic according to user types.

PHP OOP builder pattern use

I'm confused about using the PHP Builder pattern in practice. In much of the documentation they propose using the Builder like this.

require 'Pizza.php';
require 'PizzaBuilder.php';

$pizza_builder = (new PizzaBuilder('medium'))
        ->cheeze(true)
        ->bacon(true)
        ->build();

$pizza = new Pizza($pizza_builder);

The Pizza class takes the PizzaBuilder as a constructor parameter and initializes its properties from it.

Why not instantiate the object directly from the Builder? Is this bad (an anti-pattern)?

require 'Pizza.php';
require 'PizzaBuilder.php';

$pizza = Pizza::getBuilder("medium")
        ->cheeze(true)
        ->bacon(true)
        ->build();

The only difference between the two implementations is modifying the build() function in the Builder class to return a new Pizza object instead of returning the Builder instance.

Can you advise me on which clean Builder style to use?

I want a class A to use a method of class B, and I want a method of class A be used by class B

I'm doing a little project for school where I'm trying to build a spreadsheet program, and I have two classes. I will simplify this with pseudocode a little bit so it's not too messy.

class DocumentController {
    Document doc //This is a class with a CRUD on a document (it has Sheets and every Sheet has a Table full of Cells)
    Parser p

    getValueOfCell (sheetName, positionX, positionY) {
         returns value of a cell in a sheet in the position x,y
    }
    
    setCell (String expression, sheetName, positionX, positionY) {
         //Somewhere here we need to use p.evaluate()
    }
 
}

class Parser {
    DocumentController docController;
    evaluate (expression: String) {
        //Somewhere here, I need to use the method getCell from Document to evaluate the expression (the expressions have references to other cells, so the Parser needs to resolve these references)
        ...
        return value of the expression (float, integer, string, whatever)
    }
}

So apparently my teacher said that this is a bad design, because these classes are too coupled and that is a code smell. Can someone explain to me why this is so bad? How can I make a better design?

Thank you, and sorry if I made some typos or the code is not legible.

Friday, April 15, 2022

Python. Dependency injection for user defined handler

I want to create some sort of dependency injection that I personally call a context_extractor (not sure that this is the correct description).

Why: I use python-telegram-bot (PTB). A user-defined handler in PTB looks like:

def my_handler(update, context):

Where update is the event/update from the server (a user message, for example), and context gives access to all types of persistent data. my_handler expects exactly 2 arguments; this is a PTB feature.

For example, access to a saved user name looks like context.user_data['name'] or even context.user_data['user_object'].name. Typing these lines every time is not cool, and using name = context.user_data['name'] adds a lot of overhead in the code.

Here is the declaration of a user-defined handler:

my_handler_cmd = CommandHandler(command=config.my_handler_s, callback=handlers.my_handler)

I want to make the handler declaration accept an optional dict:

my_handler_cmd = CommandHandler(command=config.my_handler_s, callback=handlers.my_handler, extra_data={'name': 'context.user.name', 'age': "context.user_data['user_object'].name"})

Each key is the expected variable name and its value is the path to this variable inside a context.

(all keys and values are literals but values can be replaced for an parent objects/classes) in future versions.

At this case a user defined handler becomes like: def my_handler(update, context, name, age: int = '') (recalls a Depends in FastAPI.

My questions:

  1. Is this a good idea, and have I chosen a good approach for the implementation?
  2. Does a similar framework for this goal already exist, so that I am not reinventing the wheel?
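
For what it's worth, a minimal sketch of the extraction idea as a decorator (plain Python, not python-telegram-bot API; the path syntax and names are assumptions):

import functools

def _resolve(path, context):
    # Only supports paths of the form "context.user_data['<key>']" in this sketch.
    prefix = "context.user_data["
    if path.startswith(prefix):
        key = path[len(prefix):].rstrip("]").strip("'\"")
        return context.user_data[key]
    raise ValueError(f"unsupported path: {path}")

def with_context(**paths):
    """Wrap an (update, context) handler and inject selected context values as kwargs."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(update, context):
            extra = {name: _resolve(path, context) for name, path in paths.items()}
            return handler(update, context, **extra)
        return wrapper
    return decorator

@with_context(name="context.user_data['name']")
def my_handler(update, context, name):
    print(f"Hello, {name}")

Because the wrapper still has the two-argument (update, context) signature, it can be registered with CommandHandler as usual.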

Wrap object in custom Python class adding extra logic

A Python library provides a function create_object that creates an object of type OriginalClass.

I would like to create my own class so that it takes the output of create_object and adds extra logic (on top of what create_object already does). Also, that new custom object should have all the properties of the base object.

So far I've attempted the following:

class MyClass(OriginalClass):
  
  def __init__(self, *args, **kwargs):
    super().__init__(*args, **kwargs)

This does not accomplish what I have in mind, since the function create_object is not called and the extra logic it handles is not executed.

Also, I do not want to attach the output of create_object to an attribute of MyClass, like self.myobject = create_object(), since I want its properties to be accessible simply by instantiating an object of type MyClass.

What would be the best way to achieve that functionality in Python? Does it correspond to an existing design pattern?

I am new to Python OOP, so maybe the description provided is too vague. Please feel free to request a more in-depth description of any vaguely described parts.
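
A common way to get this behaviour is composition plus __getattr__ delegation (essentially a lightweight proxy/decorator). A minimal sketch, assuming create_object is importable from the library in question:

class MyClass:
    """Wraps the instance returned by create_object and forwards everything else to it."""

    def __init__(self, *args, **kwargs):
        # create_object performs the library's own setup logic
        self._wrapped = create_object(*args, **kwargs)

    def __getattr__(self, name):
        # Only called for attributes not defined on MyClass itself,
        # so the wrapped object's properties stay accessible as my_obj.whatever
        return getattr(self._wrapped, name)

    def my_extra_logic(self):
        ...

If subclassing is a hard requirement, the alternative is usually a classmethod factory (e.g. MyClass.create(...)) that calls create_object and adapts its result, but the delegation version above tends to be simpler.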

How to use BLoC pattern via flutter_bloc library?

I'm writing a small tamagotchi app using Flutter, and I'm now learning how to use the flutter_bloc library. When the user taps a pet image on the screen, a CircularPercentIndicator widget should be redrawn, but it doesn't work. I'm trying to connect the view with the bloc using the BlocBuilder and BlocProvider classes, but that didn't help. After tapping the pet widget, the animation runs forward, but the state of saturationCount and the CircularPercentIndicator is not updated.

Here is my BLoC for pet feeding:

class PetFeedingBloc extends Bloc<SaturationEvent, SaturationState> {
  PetFeedingBloc()
      : super(const SaturationState(saturationCount: 40.0)) {
    on<SaturationSmallIncrementEvent>((event, emit) => state.saturationCount + 15.0);
    on<SaturationBigIncrementEvent>((event, emit) => state.saturationCount + 55.0);
    on<SaturationDecrementEvent>((event, emit) => state.saturationCount - 2.0);
  }
}

In the SaturationBarWidget class I'm trying to connect the percent indicator widget with the BLoC, but it does not work. Here it is:

class SaturationBarWidget extends StatefulWidget {
  const SaturationBarWidget({Key? key}) : super(key: key);

  @override
  State<SaturationBarWidget> createState() => SaturationBarWidgetState();
}

class SaturationBarWidgetState extends State<SaturationBarWidget> {
  @override
  void initState() {
    Timer? timer;
    timer = Timer.periodic(const Duration(milliseconds: 3000), (_) {
      setState(() {
        context.read<PetFeedingBloc>().add(SaturationDecrementEvent());
        if (context.read<PetFeedingBloc>().state.saturationCount <= 0) {
          timer?.cancel();
        }
      });
    });
    super.initState();
  }

  @override
  Widget build(BuildContext context) {
    return BlocBuilder<PetFeedingBloc, SaturationState>(builder: (context, state){
      return CircularPercentIndicator(
        radius: 50.0,
        lineWidth: 20.0,
        animateFromLastPercent: true,
        percent: context.read<PetFeedingBloc>().state.saturationCount / 100,
        center: const Icon(
          Icons.emoji_emotions_outlined,
          size: 50.0,
        ),
        backgroundColor: Colors.blueGrey,
        progressColor: Colors.blue,
      );
    });
  }
}

And here is my PetWidget class with the image that needs to be tapped:

class PetWidget extends StatefulWidget {
  const PetWidget({Key? key}) : super(key: key);

  @override
  State<PetWidget> createState() => PetWidgetState();
}

class PetWidgetState extends State<PetWidget> with TickerProviderStateMixin {
  late Animation<Offset> _animation;
  late AnimationController _animationController;
  static GlobalKey<SaturationBarWidgetState> key = GlobalKey();
  bool reverse = true;
  Image cat = Image.asset('images/cat.png');

  @override
  void initState() {
    super.initState();
    _animationController =
        AnimationController(vsync: this, duration: const Duration(seconds: 4));
    _animation = Tween<Offset>(begin: Offset.zero, end: const Offset(1, 0))
        .animate(CurvedAnimation(
            parent: _animationController, curve: Curves.elasticIn));
    _animationController.addStatusListener((status) {
      if (status == AnimationStatus.completed) {
        _animationController.reverse();
      }
    });
    //_animationController.forward();
  }

  @override
  void dispose() {
    _animationController.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Center(child:
        BlocBuilder<PetFeedingBloc, SaturationState>(builder: (context, state) {
          return Center(
              child: SizedBox(
                width: 300,
                height: 400,
                child: SlideTransition(
                  position: _animation,
                  child: GestureDetector(
                    child: cat,
                    onDoubleTap: () {
                      context.read<PetFeedingBloc>().add(SaturationBigIncrementEvent());
                      _animationController.forward();
                    },
                    onTap: () {
                      context.read<PetFeedingBloc>().add(SaturationSmallIncrementEvent());
                      _animationController.forward();
                    },
                  ),
                ),
              )
          );
        })
    );
  }
}

jeudi 14 avril 2022

Creating a python script for patterns

I would like to ask your help in creating a python code for my data.

I have a list of strings, e.g. "AGGCHRUSHCCKSGDSKCGGHCSG". I would like to get all the "G"s in a string and print a pattern like this: GG-G-GG-G. If there are no Gs in the string, it should print "No G found".

I have tried basic string operations, substrings, and print in Python, but that's all I've got. I'm really new here. I'm sorry, and thank you very much.
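
For what it's worth, a minimal sketch of one way to do it with a regular expression: consecutive runs of "G" are collected and joined with dashes.

import re

def g_pattern(s: str) -> str:
    runs = re.findall(r"G+", s)          # every maximal run of consecutive G's
    return "-".join(runs) if runs else "No G found"

print(g_pattern("AGGCHRUSHCCKSGDSKCGGHCSG"))   # GG-G-GG-G
print(g_pattern("ACTACT"))                     # No G found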

How to print this alphabetic diamond pattern in Java

I want to print the following pattern in Java. Can someone help me?

      A
     ABA
    ABCBA
   ABCDCBA
  ABCDEDCBA
 ABCDEFEDCBA
ABCDEFGFEDCBA
 ABCDEFEDCBA
  ABCDEDCBA
   ABCDCBA
    ABCBA
     ABA
      A
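
For illustration, a short sketch of the row logic (in Python; the same two loops carry over directly to Java): each row of width w prints the letters A..(A+w-1) followed by their mirror image, padded with spaces on the left.

n = 7   # widest row reaches the letter G

def row(width):
    letters = [chr(ord('A') + i) for i in range(width)]
    return "".join(letters + letters[:-1][::-1])   # e.g. width 3 -> "ABCBA"

for i in list(range(1, n + 1)) + list(range(n - 1, 0, -1)):
    print(" " * (n - i) + row(i))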

Restrict the instantiation of any concrete product to its factory only

The situation may be well known.

We do not want certain objects to be instantiable by everyone; we want to delegate their instantiation to the factory, and ONLY that one factory.

Everyone is allowed to instantiate the factory itself, and to call its ONE and ONLY method, e.g. "produce".

It is like BOPF or handler classes in a customizing table: no one else shall be able to instantiate those products.

I would like to do that with the nice form-driven design in ABAP SE24, and use as little code as possible.

I managed to find a way.

Prose pseudocode:

We have a factory class, which the "usual" user can instantiate.

Then the usual user ( client) can call lr_factory->produce( "the_class_name_of_the_concrete_product" ).

INFO: This class name may be hardcoded or determined dynamically.

Then "produce" creates a "mediator" object...

Then the mediator calls its method "produce_authorized("passing the class name");

In the concrete product "ZCL_PRODUCT_A" the class of the mediator object is assigned as friend.

Furthermore, the "ZCL_PRODUCT_A" is flagged as create private, and also its CTOR.

This works, and the mediator object can call "ro_instance = new ("passed_concrete_class_name").

However...

What I want it to look like, is:

The mediator is (actually, it already is) a superclass of the concrete product, to make it easier for the user to simply inherit a new concrete product from the mediator superclass, instead of having to copy an already delivered concrete product class.

And what I do not get is:

CREATE PROTECTED.

My superclass could create its own children (if this were enabled), BUT the direction is exactly 180 degrees the opposite.

So, setting aside the argument about why a subclass should be able to instantiate its parent rather than the other way around, I would like to know:

Did I maybe miss something in the SE24 designer?

INFO: interface-based designs are not that important right now, UNLESS you have some really cool "overengineered" idea; I'm fine with that.

erlang maps key pattern match

I am new to Erlang. I have a map like #{"a/.+" => "v1", "b/c/.+" => "v2"}.

I want to get a value by an input key: for example "a/d" should match "a/.+" and return "v1".

Pattern matching is easy when the key is exact, but here the map keys are regular expressions and the input key is a literal string. How can I implement this?
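
Since regex keys cannot be matched structurally, the usual approach is to iterate the map and test each key against the input. A rough sketch of the idea (shown in Python only for brevity; in Erlang the equivalent would loop over maps:to_list/1 and use the re module):

import re

routes = {r"a/.+": "v1", r"b/c/.+": "v2"}

def lookup(key):
    # Try every regex key until one fully matches the input key.
    for pattern, value in routes.items():
        if re.fullmatch(pattern, key):
            return value
    return None

print(lookup("a/d"))      # v1
print(lookup("b/c/x"))    # v2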

mercredi 13 avril 2022

Nested modal service dependencies not correctly resolved at runtime

I have a decorated modal service that basically logs some information before opening a new window. Think...

export class OriginalModalClass {
  public open(component: any): ModalInstance {
    // Interesting stuff here
  }
}

@Injectable({ providedIn: 'root' })
export class DecoratedModalClass extends OriginalModalClass {
  constructor(private logger: Logger) { super(); }
  public open(component: any): ModalInstance {
    this.logger.log(component.name);
    return super.open(component);
  }
}

This app is not well optimized and does not load any modules lazily, so every component, service, etc. is in one large module. The logging always seems to work when opening the first window, but if that same service is a dependency of the first modal, which then opens another modal window, the service dependency is not substituted and nothing is logged. Do I need to explicitly specify it as a provider for these cases? I originally looked into using forRoot(), but that seems to be a solution designed for lazily loaded modules, which we currently don't use.

@NgModule({
  exports: [...],
  imports: [...],
  providers: [
    { provide: OriginalModalClass, useFactory: (decoratedModalClass) => decoratedModalClass, deps: [DecoratedModalClass] }
  ]
})
export class AppModule {}

How to listen the Change of the HashMap's specific key's Value in Java

I have to wait until a HashMap key's value is changed from another thread, and then continue processing the request.
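
The usual shape of a solution is wait/notify around the map rather than polling it. A rough sketch of the idea (Python's threading.Condition here; in Java the same role is played by wait()/notifyAll(), a CountDownLatch, or a CompletableFuture):

import threading

class ObservableDict:
    """Map wrapper that lets one thread block until another thread changes a key."""

    def __init__(self):
        self._data = {}
        self._cond = threading.Condition()

    def put(self, key, value):
        with self._cond:
            self._data[key] = value
            self._cond.notify_all()          # wake up anyone waiting for a change

    def wait_for(self, key, predicate, timeout=None):
        # Blocks until predicate(current value) is true (or the timeout expires).
        with self._cond:
            return self._cond.wait_for(lambda: predicate(self._data.get(key)), timeout)

The request-processing thread would call something like wait_for("status", lambda v: v == "done"), while the updating thread calls put("status", "done").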

Should I avoid dynamic_cast or typeid to separate processing steps in this design?

Modules in the program communicate with each other by using Message.

Sometimes, one object receives two different types of messages from two different objects.

The communication happens by having an object put messages into a queue on the receiving object's side (possibly through a mediator).

To clarify with an example:

class Message {};
class MsgTypeA : public Message {};
class MsgTypeB : public Message {};
class MsgTypeC : public Message {};
...

// Other modules derived from Module override accept_msg(), is_full(), and run()

class ModuleA : public Module {
public:
  // Subclasses override accept_msg to accept specific type of messages
  virtual bool accept_msg(Message* msg) {
    // The problem (usage of dynamic_cast)
    if (dynamic_cast<MsgTypeA*>(msg)) { // Module only accepts certain concrete class(es)
       // The problem (usage of typeid)
       queue[typeid(*msg)].push_back(msg);
    }
  }
  virtual bool is_full(std::type_index x) {
    // The problem
    return queue[x].full(); // assume std::vector supports full()
  }
  virtual void run() {
     if (queue[something].empty()) {
       // handler for the message type
     }
     if (queue[something].empty()) {
       // handler for the message type
     }
  ...
private:
  // The module can have multiple queues 
  // because a number of modules can send messages to the specific module
  // using different types of messages.
  std::map<std::type_index, std::vector<Message*>> queue;
};

class ModuleB : public Module {
public:
  virtual bool accept_msg(Message* msg) {
     // The problem (usage of dynamic_cast)
     if (dynamic_cast<MsgTypeB*>(msg)) {
       ..
     }
   }
   ...
};

This seems similar to the acyclic visitor pattern, where each visitor (message) needs to be accepted by a module.

I don't think this is just a dumb if...else chain, and I wonder whether it is reasonable for this design to use typeid and dynamic_cast.

If it is avoidable and undesirable, what are reasonable ways to handle this kind of situation?

mardi 12 avril 2022

Design Pattern- Modelling

I have a library with an interface called Contract; it has an abstract method called verify(). The user of this library implements the interface with their own version of the verify() method. The verify method in the implementing class is called by the library internally.

I want to create my own version of this library, so that some basic validations are already in place and the verify function still gets called by the library. How do I go about doing this? Something I'm fiddling with is below.

abstract class NewContract implements Contract {
    public boolean verify() {
        // basic validation rules
        return newVerify();
    }

    abstract boolean newVerify(); // this part to be implemented by the users.
}

Is there a better way to accomplish this? The library uses reflection and will load the class that implements Contract, i.e. my implementation (NewContract), and not the class that the user will implement.

Is there a design pattern to avoid properties not set error in class

I have some legacy code I am trying to improve somewhat.

I have a Proxy class that accepts an apiKey. My application is always aware of this key. Therefore I can start with:

$proxy = new Proxy("api-key");

Next, depending on the shop, I need to find their credentials - and pass that in to the proxy class:

$proxy->setShop($shop); // $shop will be determined at runtime

Then I can make a request like:

$proxy->getItems()

However, if I have not called setShop, the code will fail internally. But the shop cannot go in the constructor because it is not known yet.

I could delay construction to the point where I know the shop, but that would involve passing around the apiKey, which I would rather not do. I could create an intermediary class that creates the Proxy for me, like a factory, and call that factory from the controller once the shop is known. I don't have a very strong opinion on whether that is a good approach or not.

Is there a particular pattern here that I could use? I could just throw an Exception with a clear message if the shop isn't set, but that feels defeatist; then again, maybe I am wrong and that is exactly the right thing to do.

So is there a particular design pattern to use in these situations?
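
One pattern that fits is indeed a small factory that owns the apiKey and only hands out fully configured proxies, so an "unset shop" state simply cannot exist. A rough sketch (Python, hypothetical names):

class Proxy:
    def __init__(self, api_key, shop):
        self._api_key = api_key
        self._shop = shop          # always present: no half-initialised object

    def get_items(self):
        ...                        # call the remote API for this shop

class ProxyFactory:
    """Created once at startup with the apiKey; controllers never see the key."""

    def __init__(self, api_key):
        self._api_key = api_key

    def for_shop(self, shop):
        return Proxy(self._api_key, shop)

factory = ProxyFactory("api-key")      # wired up once, e.g. in the service container
shop = "shop-123"                      # determined at runtime in the controller
proxy = factory.for_shop(shop)
items = proxy.get_items()

Throwing an exception from getItems() when the shop is missing is also a legitimate fallback, but making the invalid state unconstructable removes the need for that check entirely.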

How to elegantly check null check for java method parameter?

I have code like this:

    public void updateAccount(String a, Integer b, Double c, Float d, List e, Set f, Map g, Collection h, Enum i /* ... more parameters */) {
        if (a != null && !a.isBlank()) {
            this.a = a;
        }
        if (b != null) {
            this.b = b;
        }
        if (c != null) {
            this.c = c;
        }
        if (d != null) {
            this.d = d;
        }
        if (e != null && !e.isEmpty()) {
            this.e = e;
        }
        if (f != null && !f.isEmpty()) {
            this.f = f;
        }
        if (g != null && !g.isEmpty()) {
            this.g = g;
        }
        if (h != null && !h.isEmpty()) {
            this.h = h;
        }
        if (i != null) {
            this.i = i;
        }
        // ...
    }

All parameters are checked for null, and if not null, the value of the corresponding field is changed.

I feel that the above method is too hard-coded.

I'm wondering how I can turn this into more efficient code.

Best Regards!
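
One language-agnostic way to collapse the repetition is to treat the update as a collection of optional changes and apply only the ones that are actually present. A minimal sketch of the idea (Python; in Java the same shape can be expressed with a Map<String, Object> plus reflection, or with setters that ignore null):

class Account:
    def update(self, **changes):
        """Apply only the fields that were provided; skip None and blank strings."""
        for field, value in changes.items():
            if value is None:
                continue
            if isinstance(value, str) and not value.strip():
                continue
            setattr(self, field, value)

account = Account()
account.update(a="alice", b=42, c=None, d="   ")   # only a and b are applied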

lundi 11 avril 2022

Should an inter-layer interface "leak" data structures/enums from the higher layer?

When separating a program's design into layers, e.g. a model layer and a UI layer, one ideally builds a pad of interfaces between the two, so that communication is abstract (i.e. Dependency Inversion Principle). However, when doing that, there invariably comes the question of what data structures one should pass as function arguments across that layer boundary - for example, the UI layer may declare a form that collects some information, say

struct Login {
     string userName;
     string password;
}

but it could also be more complicated, like an enum of choices, e.g.

enum WifiProtocol {
     PROTOCOL_WEP,
     PROTOCOL_WPA2,
     PROTOCOL_OTHER
}

But the trick is that, the way I understand it, the Dependency Inversion Principle says not only that both layers communicate via said interface "padding", but also that this padding is "owned" by the higher-level layer, in the sense that changes in the lower-level layer should not force a change in that interface. And if we imagine that the UI layer sits on top, with the model layer below, then the UI layer would provide an abstract set of interfaces for the different services the user should be able to interact with, e.g.

interface ILoginService {
     void login(Login login);
}

or

interface ISetWifiSecurityService {
     void setWifiSecurity(WifiProtocol wifiProtocol);
}

But the types Login and WifiProtocol are defined in the UI layer, so something in the model layer will now "see" them. Now, there's typically a buffering, dedicated service layer inside the model before you get to the "domain core", and that "core" is supposed to be free, pure, and independent of all UI and persistence concerns. So what do you do with, say, that enum? (It's reasonable that you could split the Login up into function parameters, but the enum will need mapping, and those mapping rules need a home.) If anything in that "core" depends on it, say to map those protocols to objects, would that constitute an objectionable "intrusion/leakage of UI concerns into the model"?

Does this mean we should "fatten" the application service sublayer of the model a bit to take on the extra responsibility of doing that mapping itself so the domain sublayer is free and clear of that UI layer-defined enum? We also don't want to duplicate that enum, either, as duplicated code that is at all liable to change really hurts the maintainability.

What's the best practice for reusing functions on different models in NestJS?

I have declared some functions for a MongoDB collection, Model A. Now I want to create another collection, Model B, and most of the functions for Model A can be reused for Model B. Currently I'm in a situation where I have to repeat myself, recreating these functions just for a different model. What's the best practice for reusing these functions on a different model (i.e. how can I improve my code)? I am using the NestJS framework. Example code that I have:

export class Service {
  constructor (
    @Inject('ModelA')
    private modelA: Model<IModelA>,
    @Inject('ModelB')
    private modelB: Model<IModelB>,
  ){}
  
 public async findModelA() {
   return await this.modelA.find().exec();
 }

 // How to avoid repeating creating the same functions here?

 public async findModelB() {
   return await this.modelB.find().exec();
 }
}

The repeated functions which are similar to the above example are all over the place. So I really want to get rid of this pain. Thank you in advance for the help!

How to correctly design a debounce circuit for a push button counter. Button to be used as a lap enable for a stopwatch lap ROM

Before I get started, I just want to say this is just the design aspect. So far, no code has been written for this aspect of my project.

I just designed a lap function for a stopwatch, which functions essentially as a ROM. I want to make sure that each time I press a push button (up to 4 times) a lap is stored. I plan to implement this with a 2-bit counter, where the push button counts up through 4 cycles in 2 bits. From there, the 2-bit number is used as the input to a 2-to-4 decoder. The 4 outputs are used as the enables of 4 different registers; these registers hold the 4 laps. The inputs of the registers are the current count of the stopwatch, and the registers are all connected to a 4-to-1 MUX. I want to use a 2-bit lap select as the select lines of the MUX to push through the targeted lap time, which will eventually be shown on the 7-segment display. (No issues with the display aspect, which I already designed.)

Currently, I am worried about how to debounce the push button, since the clock on my board runs at around 200 MHz. How should I do this? Should I build something resembling a shift register with the push button as the input, and pass all the delayed signals through an AND gate? I'm also worried about repeated counting caused by how long the button is held down. Should I also couple this with a clock divider so I can slow down the clock to the counter's register? In conversation with my professor I heard something about clock dividers being inaccurate, with a high degree of uncertainty, when driven by high-speed clocks.

This is what I have designed for the counter

Any help is appreciated, thank you.

Implementing strategy when every strategy need different params

I have a method that does 3 different things based on the user type, so I thought I could split it into strategies, with a factory that returns the desired strategy based on user.type. So I will get:

strategy_interface.execute(product_id, user_id);

and then

strategy = strategy_factory.create(user.type);
strategy.execute(...);

but in reality method "execute" needs slighty different params for every user type:

if user.type is 1 then it needs only product_id
if user.type is 2 or 3 or 5 then it needs product_id and user_id
else it should only throw illegal_action_exception

I like my approach as it is easy to test: I only have to check the instance type returned by the factory. But my problem is that every returned strategy works differently. Maybe instead of a universal factory I should do something like this:

if user.type is 1:
    strategy = new first_strategy();
    strategy.execute(...);
else if user.type is in(2, 3, 5):
    strategy = new second_strategy();
    strategy.execute(...);
throw new illegal_action_exception();
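
One option is to keep a single execute signature and let each strategy pick out only what it needs, so the factory can stay universal. A minimal sketch (Python, hypothetical names):

class IllegalActionException(Exception):
    pass

class FirstStrategy:
    def execute(self, *, product_id, **_unused):
        print(f"type 1 uses only product {product_id}")

class SecondStrategy:
    def execute(self, *, product_id, user_id, **_unused):
        print(f"types 2/3/5 use product {product_id} and user {user_id}")

def create_strategy(user_type):
    if user_type == 1:
        return FirstStrategy()
    if user_type in (2, 3, 5):
        return SecondStrategy()
    raise IllegalActionException(user_type)

strategy = create_strategy(2)
strategy.execute(product_id=42, user_id=7)   # every caller passes the same arguments

The if/else version in the question is also fine; the factory mainly pays off once the type-to-strategy mapping is needed in more than one place.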

dimanche 10 avril 2022

How to enforce default behavior in a virtual function

For example, say I have a basic data object class as below.

class DataObject {
   protected:
      bool data_changed;
      virtual void save() {}
      virtual void load() {}
   public:
      virtual void idle() { 
          if (data_changed) {
              save();
              data_changed = false;
          }
      }
};

The idea is that "idle" is called periodically from some main looping thread and performs non-critical updates.

Now I want derived classes to be able to have their own idle functions. But I don't want to lose the default behavior.

One solution is to say "remember to call DataObject::idle() from overridden idle() functions".

Like this:

class ChildData : public DataObject {
   public:
      virtual void idle() override {
          //do something
          
          DataObject::idle(); //remember to call parent idle!
      }
};

But this is very dangerous as people can just forget.

Is there a way to enforce this somehow? Or make it automatic, like a virtual destructor?

(My current "workaround" is to have 2 functions, one the parent_idle that does the important stuff, and then one overridable child_idle that derived functions can override. But this is a bit messy, and also you have to make a whole new set of functions again if you want some child function to enforce its own default...)

LCOM is always 1 in JArchitect and Metrics Reloaded

So, I've been studying design patterns and in the context of the Single Responsibility Principle I tried to calculate the Lack of Cohesion of methods (LCOM) in Java using Metrics Reloaded and JArchitect. Both programs always calculate LCOM to be 1 although in some cases it's clearly not. Even the below standard example of low cohesion has an LCOM of 1 in these programs:

package com.StyleM;

public class NumberManipulator {
    private int number;

    public int numberValue() {
        return number;
    }
    public void addOne() {
        number++;
    }
    public void subtractOne() {
        number--;
    }
}

To my understanding the LCOM in this example should be 1-(3/4) = 0.25, because there are in total 4 methods (including the constructor) and 3 of them use the number field. What am I doing wrong?

samedi 9 avril 2022

How to convert Android class to Singleton object (Kotlin)

Currently, I have a database manager class that handles all operations to the database like this:

class DatabaseManager(val context: Context) {
    private val db = Firebase.firestore
    //Other functions, etc.
}

It makes use of the context passed in by different activities to perform operations on the database. The thing is, every single activity that requires database functions has to instantiate this manager class first and then call the functions. I would like to use the Singleton design pattern so that all the activities use a single instance of the class. I believe Kotlin's objects can do this; however, I also need to be able to pass the activities' context into this manager class. Any assistance is appreciated, thank you!

Transactions for long running multi-task activities on a message broker

I would like to hand off an order fulfillment task to a message broker, but ensure that the fulfillment is transaction-controlled in MongoDB using Python. That is to say:

  • User should get fast feedback after placing the order
  • Fulfillment should run offline for a few minutes because multiple tasks need to complete
  • If fulfillment fails no order should be billed to user
  • Reduce DB writes for the order receive procedure

What is the best approach for such a scenario?

Option 1: Single transaction to either insert or not insert order

Receive API request
START transaction
Insert new order to Orders
Send task to message broker
Send API order received, fulfillment started
Wait for message broker to complete task
Send API fulfillment completed
END transaction 
Bill all orders of the day

Option 2: Two transactions using a dedicated order state field

Receive API request
START transaction
Insert order with status received
Send API order received
END transaction

START transaction
Send task to message broker
Send API fulfillment started
Wait for message broker to complete task
Update order with status fulfilled
Send API fulfillment completed
END transaction 
Bill all orders of the day in status completed

Or perhaps there is an entirely different approach?

What is the best way to design model in given context?

I'm stuck on this problem. Basically I've got a data structure like this:

                "Manufacturer": [
                    {
                        "model": "Model",
                        "models": [
                            {
                                "color": "black",
                                "condition": "C",
                                "memory": "32 GB",
                                "price": 9999999
                            },
                            {
                                "color": "black",
                                "condition": "D",
                                "memory": "32 GB",
                                "price": 999999
                            },
                            {
                                "color": "black",
                                "condition": "NC",
                                "memory": "32 GB",
                                "price": 99999
                            }
                        ]
                    }
                ]
            }

I want to make a view with a drop-down list of manufacturers, a drop-down for all models of the chosen manufacturer, and drop-downs for condition and memory capacity. I'm stuck on designing the model that I need to pass to the view. Any suggestions on how I should design it, and what are the best practices?

React Hooks - distribution of responsibilities (GRASP)

Introduction

I am implementing the analytics of my react app.

This is the structure of my project:

/
  components/
    Profile/
      EditProfileForm.js

  screens/
    Profile/
      EditProfile.js

  services/
    api/
      profile/
        editProfile.js

  hooks/
    profile/
      useEditProfile.js

The folder 'services/api' contains the code that interfaces with the server, that is, the code that calls my endpoints, parses the responses of my microservices... I mean, the interface/gateway.

I have created UI and Business Logic Hooks... this is an extra layer for cleaning up the code of my components, or for executing certain business logic, such as: if the user isn't premium, don't let them edit their profile.

Problem

Now, I am about to track the events of my app... I need to place some code, somewhere, for tracking profile edits (and the respective errors).

I have decided to create helper methods inside my hooks... but maybe, they should be placed in the api methods...

What do you think? Following the low coupling, high cohesion, and controller principles of GRASP, where should I place this code?

import analytics from "../../services/api/analytics";

const trackProfileEditionError = (err) => {
   analytics.eventsLogger.EXCEPTION({
     description: err.message ?? "Error editing profile",
     fatal: false,
   });
}

My current solution

I see some of my business logic hooks as controllers (intermediaries between my interface and the algorithm that implements it).

In order to decouple the tracking part from the call to my edit-profile endpoint, I have decided to place the code as a helper of my custom hook. Like this:

/
  components/
    Profile/
      EditProfileForm.jsx

  screens/
    Profile/
      EditProfile.jsx

  services/
    api/
      profile/
        editProfile.js

  hooks/
    profile/
      helpers/
        analytics/
          trackProfileEditionError.js

      useEditProfile.jsx // calls services/api/profile/editProfile.js

Note: I know what a custom hook is... in the code of my custom hooks I have native hooks (useRef, useEffect, useCallback...), so they are not helper/utility functions.

How to make the line which is in skillshare?

example 1

example 2

How do I make these kinds of lines?

Bridge between incompatible implementations in typescript

I'm trying to create a bridge between two services with an incompatible interface.

ServiceType is an argument from the user, so it has to be validated during runtime.

The best I could think of is this, but there are several problems: I would have to create a method like find on the SericeByType object for each function on CampaignService | LeadService, and the result of that function is a union of CampaignModel | LeadModel.

Is there a better way to do it?

interface CampaignModel {
  id: number
}

class CampaignService {
  public async find(startId: number, limit: number): Promise<CampaignModel[]> {
    return [];
  }
}

interface LeadModel {
  id: string
}

class LeadService {
  public async find(startId: string, limit: number): Promise<LeadModel[]> {
    return [];
  }
}

type ServiceType = 'Campaign' | 'Lead'

class SericeByType {
  private readonly campaignService: CampaignService = new CampaignService();
  private readonly leadService: LeadService = new LeadService();

  public async find(type: ServiceType, startId: number | string, limit: number) {
    if (this.isCampaign(type)) {
      return this.campaignService.find(startId as number, limit);
    } else if (this.isLead(type)) {
      return this.leadService.find(startId as string, limit);
    } else {
      throw new Error();
    }
  }

  private isCampaign(type: ServiceType) {
    return type === 'Campaign';
  }

  private isLead(type: ServiceType) {
    return type === 'Lead';
  }
}

const service = new SericeByType();
service.find('Campaign', 16, 10);
service.find('Lead', 'xxxawf', 10);

React - Passing data from child to parent component

I'm building a React app with class components and I'm really stuck on one problem. I made an app that initially renders a search form, fetches some data based on what the user typed in the form, and displays it in a table. Everything works fine and I'm happy. The problem comes when I introduce a footer: for style purposes, I want the footer to always stay at the bottom of the page. At the start there's only the search form and no table; if I set the footer's CSS to "position: sticky" there's no way to make it stay at the bottom, so it needs a fixed position. But when the table renders, the fixed position doesn't work, because the table might be very long and gets rendered over the footer.

So I need to change the footer's CSS dynamically. The solution I came up with is to create two styles in the CSS file for the same footer, one with the fixed position and an otherwise identical one with the sticky position. I would then change the footer's id before and after the table is rendered (to trigger the right "position"), and I'm capable of doing that in React. Basically, I have to tell React: when the table is not rendered in the app, set the footer's id to "fixed" to trigger the fixed-position CSS style; when the user fetches the data and the table is rendered, change the footer's id to "sticky".

The only problem is: I don't know how to do this within the hierarchy of my app's components. As of now, the footer is rendered in the App.js component, which is:

App.js

import './App.css';
import { ButtonFetcher } from './components/ButtonFetcher';  // all non-default imports, so
import { MapEntry } from './components/MapEntry';  // they must all use the class name in braces {}.
import { Header } from './components/Header';
import { ButtonMatches } from './components/ButtonMatches';
import { CustomFooter } from './components/CustomFooter';

function App() {
  return (
    <div className="App">
      <Header />
      <ButtonMatches />
      <CustomFooter />
    </div>
  );
}

export default App;

As you can see, App.js renders a header, the search form (ButtonMatches) and the CustomFooter. Inside ButtonMatches I implemented the fetch() logic, and that component renders the MatchTable component, which is null if the data doesn't exist (the user hasn't searched yet) and a full HTML table if the user did search.

ButtonMatches.js

// (some code ...)

render() 
    {       
        // (some code ...) 
        
        return (
                <div>
                  <div>
                    <form className="PlayerSearch" onSubmit={(event) => { event.preventDefault(); } }>
                        <input type="search" id="search-recent-matches" name="Gamertag" placeholder="Enter a Gamertag..."></input>
                        <button type="submit" onClick={ this.onClickNewSearch } >Search</button>
                    </form>
                  </div>
                  { WLKDcounter }
                  <MatchTable recentMatches={ this.data }/>
                  { more }
                </div> 
            );    
    }

So, now the problem is: how do I tell CustomFooter that the table is rendered? Right now, the CustomFooter is rendered in the App component, which is the parent component of ButtonMatches, which is the parent component of MatchTable. So basically, the MatchTable component must be able to tell its parent component ButtonMatches "Hey! The table is not null and it's rendered!"; then the ButtonMatches component must tell its parent component App the same thing. Then the App component must change the props of the CustomFooter component in order to change its state, and therefore make it change its id and style.

The very first problem is that sending data from child to parent is a React anti-pattern, and I really don't know how to avoid this. Is rendering the footer inside the MatchTable component the only solution that complies with React's unidirectional data flow? Conceptually, it looks ugly to me that the table component would render the footer component. I also need the very same footer on other pages of my website.

The second problem is that if I leave the footer inside the App component and find a way (probably through function calls) to notify the table's parents that it's rendered, I would fall into something like "reverse prop drilling", since ButtonMatches would receive information from its child MatchTable that it only needs in order to pass it up to its parent App component. Since I don't plan on using hooks right now (useContext would be helpful if the prop drilling were not "reversed"), I would be breaking a different React design pattern.

I can't find any solution to this, and my web designer friend already tried everything she could to make the style change, but to no avail. She told me that she absolutely needs the "position" attribute of the footer to change before and after the table is rendered in order to always make the footer stay at the bottom of the page in both situations.