Tuesday, December 19, 2017

Observer pattern for Lemmings

I'm making a Lemmings game. I have a Lemming class (Position position, Game game).

The lemmings move, but they are not notified of the position changes of the other lemmings. I have to implement an Observer pattern myself. Who is the observer, and who is the observable?

What solution would you propose?
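One possible answer, as a minimal sketch: let the Game play the role of the observable (subject) that broadcasts position changes, and let each Lemming be an observer. All method and field names here are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Observer: anything that wants to hear about a lemming moving.
interface LemmingObserver {
    void onPositionChanged(Lemming mover, int newX, int newY);
}

// Observable: the Game keeps the list of observers and broadcasts changes.
class Game {
    private final List<LemmingObserver> observers = new ArrayList<>();

    void addObserver(LemmingObserver o) { observers.add(o); }

    void notifyMoved(Lemming mover, int x, int y) {
        for (LemmingObserver o : observers) {
            if (o != mover) {              // a lemming need not hear about itself
                o.onPositionChanged(mover, x, y);
            }
        }
    }
}

class Lemming implements LemmingObserver {
    private final Game game;
    int x, y;
    int heardMoves = 0;                    // counts notifications, for demonstration

    Lemming(Game game) {
        this.game = game;
        game.addObserver(this);            // register as observer on creation
    }

    void moveTo(int x, int y) {
        this.x = x;
        this.y = y;
        game.notifyMoved(this, x, y);      // the move triggers the broadcast
    }

    @Override
    public void onPositionChanged(Lemming mover, int newX, int newY) {
        heardMoves++;                      // react to another lemming's move
    }
}
```

An alternative design is to make each Lemming itself the observable and have interested lemmings subscribe to one another directly; routing through the Game simply keeps subscription management in one place.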

Off-topic: how do I convince my boss to replace a stored procedure with code?

I'm making some changes to an Oracle procedure that has 314 IFs (yes, 314!). I know it could be replaced with OO code applying some design pattern (such as Strategy or Factory).

All IFs are similar to this:

IF (blahblahblah) THEN
  ...
ELSIF (blehblehbleh) THEN
  ...
ELSIF (blihblihblih) THEN
  ...
ELSIF (blohblohbloh) THEN
  ...
ELSIF (bluhbluhbluh) THEN
  ...
-- And so on through 314 logical tests
END IF;

The problem is that the code is stable and in production, and the team is not aware of techniques like design patterns.

What other reasons could I list beyond performance, easier reading/understanding of the code, and maintenance?
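One further argument is to show concretely what the change would look like. A hedged sketch in Java (the conditions and outcomes are invented stand-ins for the real branches): each former ELSIF branch becomes a rule registered in one place, and the first matching rule wins, mirroring IF/ELSIF semantics.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

// Strategy-style rule table: one entry per former ELSIF branch.
class RuleEngine {
    // LinkedHashMap preserves insertion order, so rules are tried
    // in the same order as the original IF/ELSIF chain.
    private final Map<Predicate<String>, String> rules = new LinkedHashMap<>();

    void addRule(Predicate<String> condition, String outcome) {
        rules.put(condition, outcome);
    }

    // First matching rule wins, exactly like IF/ELSIF.
    String evaluate(String input) {
        for (Map.Entry<Predicate<String>, String> e : rules.entrySet()) {
            if (e.getKey().test(input)) {
                return e.getValue();
            }
        }
        return "no-match"; // the implicit ELSE of the original chain
    }
}
```

The maintainability argument then becomes tangible: adding branch number 315 is one addRule call (or one row in a configuration table) instead of another edit inside a 314-branch statement, and each rule can be tested in isolation.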

Thanks in advance

SPA Navigation and workflow patterns

Given a complex object e.g.:

public class RootObject {
    public IEnumerable<IParticipant> ParticipantCollection { get; set; }
}

Where implementations of IParticipant are complex objects that require several screens in a SPA to populate. And, given two existing workflows (where a workflow is a series of steps or screens required to populate the object) in a 7-step wizard that are currently somewhat hard-coded with switch statements: is there an existing SPA/wizard framework that would make this more manageable?

I think I would like each implementation of IParticipant to subscribe to a workflow and inject itself into the overarching workflow of the application. The overarching/meta workflow of the application would then consist of 7 steps, each step having sub-steps, or extension points, that IParticipants subscribe to. This would let concrete IParticipant step workflows exist on their own and allow the application to grow in a more manageable way.
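The subscription idea described above can be sketched independently of any framework. This is a minimal illustration (in Java rather than C#, and with invented names): top-level wizard steps act as extension points, and each participant injects its own sub-steps into a named step.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// A top-level wizard step that participants can extend with sub-steps.
class WizardStep {
    final String name;
    final List<String> subSteps = new ArrayList<>();
    WizardStep(String name) { this.name = name; }
}

class Workflow {
    // LinkedHashMap keeps the 7 top-level steps in wizard order.
    private final Map<String, WizardStep> steps = new LinkedHashMap<>();

    Workflow(String... stepNames) {
        for (String n : stepNames) steps.put(n, new WizardStep(n));
    }

    // Extension point: a participant injects its sub-steps into a named step,
    // without the workflow knowing any concrete participant type.
    void subscribe(String stepName, List<String> participantSubSteps) {
        steps.get(stepName).subSteps.addAll(participantSubSteps);
    }

    // Flatten into the ordered list of screens the wizard will show.
    List<String> flatten() {
        List<String> screens = new ArrayList<>();
        for (WizardStep s : steps.values()) {
            screens.add(s.name);
            screens.addAll(s.subSteps);
        }
        return screens;
    }
}
```

The point of the sketch is the inversion: the switch statements disappear because each participant registers its own screens, and the wizard only walks the flattened result.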

Design pattern for transformer involving third party library

Suppose there is a third party library containing base class Transformer and concrete implementations TransformerA and TransformerB.

I need to write parallel converter classes for TransformerA and TransformerB that output a new class, say TransformerNew:

public class TransformerAConverter {
  public TransformerNew convert(TransformerA transformerA) {
    // conversion logic
  }
}

public class TransformerBConverter {
  public TransformerNew convert(TransformerB transformerB) {
    // conversion logic
  }
}

I then need to write the following function:

public TransformerNew[] process(Transformer[] transformers) {
}

How can I achieve this without instanceof or explicit type casts? I have tried using the Visitor pattern but was unable to express it.
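A true Visitor needs an accept() method on the Transformer hierarchy, which a third-party library will not have, so Visitor usually cannot be expressed here. One common workaround is a class-keyed converter registry: the cast is confined to a single generic, checked place (Class.cast) instead of being scattered through instanceof chains. The stand-in classes below mimic the third-party hierarchy; the registry names are invented.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Stand-ins for the third-party hierarchy.
class Transformer {}
class TransformerA extends Transformer {}
class TransformerB extends Transformer {}

class TransformerNew {
    final String source;
    TransformerNew(String source) { this.source = source; }
}

class ConverterRegistry {
    private final Map<Class<?>, Function<Transformer, TransformerNew>> converters =
            new HashMap<>();

    // The only cast in the whole design, and it is a checked Class.cast.
    <T extends Transformer> void register(Class<T> type,
                                          Function<T, TransformerNew> converter) {
        converters.put(type, t -> converter.apply(type.cast(t)));
    }

    TransformerNew[] process(Transformer[] transformers) {
        TransformerNew[] out = new TransformerNew[transformers.length];
        for (int i = 0; i < transformers.length; i++) {
            // Throws NullPointerException for unregistered subtypes;
            // a real implementation would fail with a clearer error.
            out[i] = converters.get(transformers[i].getClass()).apply(transformers[i]);
        }
        return out;
    }
}
```

The existing TransformerAConverter and TransformerBConverter would be registered once (e.g. register(TransformerA.class, aConverter::convert)), and process() then dispatches by runtime class with no instanceof at the call sites.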

Design pattern for casting rows of Object table

I have an issue in a project. I would like to know whether there is a design pattern for this case:

I'm retrieving a row of objects with different types of data from the database:

Object[] userInformationsRow = getUserInformationsFromDataBase();
int idColumn = 0;
int nameColumn = 1;
int birthDateColumn = 2;
// cast each column to its expected type
Integer idUser = (Integer)userInformationsRow[idColumn];
String nameUser = (String)userInformationsRow[nameColumn];
Date birthDateUser = (Date)userInformationsRow[birthDateColumn];

Is there another way to write this kind of code? It seems to me very long to write and difficult to change in the future.
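One common approach is a small row-mapper that centralizes the column indices and the casts in one place, so adding or reordering a column means changing a single class. The User class and method names here are invented for the sketch.

```java
import java.util.Date;

class User {
    final Integer id;
    final String name;
    final Date birthDate;

    User(Integer id, String name, Date birthDate) {
        this.id = id;
        this.name = name;
        this.birthDate = birthDate;
    }
}

// Centralizes the column layout and the casts in one place.
class UserRowMapper {
    private static final int ID = 0, NAME = 1, BIRTH_DATE = 2;

    // Generic helper: one checked cast (Class.cast) instead of one per call site.
    private static <T> T column(Object[] row, int index, Class<T> type) {
        return type.cast(row[index]);
    }

    static User map(Object[] row) {
        return new User(
            column(row, ID, Integer.class),
            column(row, NAME, String.class),
            column(row, BIRTH_DATE, Date.class));
    }
}
```

This is essentially the RowMapper idea used by data-access frameworks (Spring's JdbcTemplate takes exactly such a mapper); if the project can take a dependency, an ORM or a mapping library removes even this hand-written step.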

DTO converter pattern in Spring Boot

The main question is how to convert DTOs to entities and entities to DTOs without breaking SOLID principles. For example, we have this JSON:

{
  "id": 1,
  "name": "user",
  "role": "manager"
}

DTO is:

public class UserDto {
 private Long id;
 private String name;
 private String roleName;
}

And entities are:

public class UserEntity {
  private Long id;
  private String name;
  private RoleEntity role;
}

public class RoleEntity {
  private Long id;
  private String roleName;
}

And there is a useful Java 8 DTO converter pattern.

But in their example there are no one-to-many relations. In order to create a UserEntity I need to get the Role by roleName using the DAO layer (service layer). Can I inject UserRepository (UserService) into the converter? It seems that would break the single-responsibility principle: a converter component must only convert, and should not know about services or repositories.

Converter example:

@Component
public class UserConverterImpl implements UserConverter<UserEntity, UserDto> {
   @Autowired
   private RoleRepository roleRepository;    

   @Override
   public UserEntity createFrom(final UserDto dto) {
       UserEntity userEntity = new UserEntity();
       RoleEntity role = roleRepository.findByRoleName(dto.getRoleName());
       userEntity.setName(dto.getName());
       userEntity.setRole(role);
       return userEntity;
   }

   ....

Is it good practice to use a repository in the converter class? Or should I create another service/component that is responsible for creating entities from DTOs (like a UserFactory)?
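One possible split, sketched below with invented names: keep the converter pure (field mapping only) and move the repository lookup into a factory-like component. The RoleLookup interface stands in for the real RoleRepository/RoleService.

```java
// Plain data holders standing in for the real DTO and entities.
class UserDto { Long id; String name; String roleName; }
class RoleEntity { Long id; String roleName; }
class UserEntity { Long id; String name; RoleEntity role; }

// Stands in for RoleRepository / RoleService.
interface RoleLookup {
    RoleEntity findByRoleName(String roleName);
}

// Factory-like component: resolves the Role, then does the pure mapping.
// The lookup lives here, so no converter ever touches a repository.
class UserEntityFactory {
    private final RoleLookup roles;

    UserEntityFactory(RoleLookup roles) { this.roles = roles; }

    UserEntity createFrom(UserDto dto) {
        UserEntity e = new UserEntity();
        e.id = dto.id;
        e.name = dto.name;
        e.role = roles.findByRoleName(dto.roleName);  // the only non-mapping step
        return e;
    }
}
```

With this shape the pure converters stay trivially testable, and the factory is the one component whose declared responsibility is "assemble an entity from a DTO plus whatever lookups that requires", which many would argue is still a single responsibility.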

Implementation Design: Synchronised objects between multiple machines

We implemented synchronization of a user's data objects between all of the user's machines by uploading them to our database server. The object can be modified on the server and on the user's machines, and is downloaded automatically onto every machine the user logs in on.

Concerning data updates of the object, we apply the rule "first come, first served", which is important because changes can be made without a server connection and are then uploaded once the connection is re-established. Those conflict scenarios are very unlikely anyway, so we can deal with them.

However, deleting the object is a problem for us. When the user deletes the object from any machine (from the local app or the web end), how do we notify all the machines holding this object that it has been deleted?

Each object holds a unique ID. Normally we could just check whether the ID of any given object on a machine still exists in the user's server space, and if not, delete the object locally. However, we eventually want to allow manual sharing of those objects: users should be able to take the folder in which those objects are stored on their local disk and upload it, making the objects usable right away for any other user who puts them in the right directory in their app folder. That means that if an ID is not present on the server, we do not delete the object but upload it (as syncing is completely autonomous). Those objects are not actually shared but mere copies: when I share an object manually, the server assigns it a new, unique ID upon upload, as the objects are stored in user-relative, private web space.

Now the question remains: how can we notify all other machines that an object no longer exists on the server and should not be uploaded again (as the application currently would do)? One idea was to keep an ever-growing list of IDs that have been deleted on the server and first check whether the ID is included there, but this just sounds wrong and unclean to me.

I would appreciate any ideas and thoughts about this. Would it make sense to disable manual sharing and implement some sort of link-sharing system, where you can generate a link or some sort of code/share-ID for any object with which other users can add other people's objects to their accounts? That would again allow using a missing ID as the indicator to delete the object locally. Or do you see a more elegant solution?
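For what it's worth, the "ever-growing list of deleted IDs" is actually a standard technique in sync systems, usually called tombstones, and it need not grow forever: a tombstone can be pruned once every client has synced past the deletion (or after a fixed grace period). A minimal sketch with invented names:

```java
import java.util.HashMap;
import java.util.Map;

// A tombstone marks an object as deleted on the server. Clients that see a
// tombstone delete locally instead of re-uploading. Tombstones are pruned
// after a grace period, assuming every client syncs at least that often.
class TombstoneStore {
    private final Map<String, Long> deletedAt = new HashMap<>();

    void markDeleted(String objectId, long timestampMillis) {
        deletedAt.put(objectId, timestampMillis);
    }

    // Sync rule: unknown ID -> upload (manual share); tombstoned ID -> delete locally.
    boolean isDeleted(String objectId) {
        return deletedAt.containsKey(objectId);
    }

    // Drop tombstones older than the cutoff (e.g. now minus 30 days).
    void prune(long cutoffMillis) {
        deletedAt.values().removeIf(t -> t < cutoffMillis);
    }
}
```

This keeps autonomous syncing and manual sharing compatible: an ID absent from both the server and the tombstone list is treated as a new upload, while a tombstoned ID is deleted locally. The trade-off is that a client offline longer than the grace period may resurrect a deleted object, which is why the prune window must exceed the longest expected offline period.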

Thanks in advance.