programming

Getting CIDR from the IP address / subnet

I have been working on the hiera data generation / validation process. For generating certain details I needed the IP address both in the usual 1.1.1.1 form and in CIDR form. I had initially asked for both as input, when the users came back saying: what if I give the subnet information along with the IP address, like 1.1.1.1/9, and you calculate the CIDR information? Sounds right.

I had initially asked for that from the user because I did not know how to calculate it. But since the user asked for it to be that way, I started to look into how to arrive at that value. I had the first clue with me: the user had told me to look into an XOR based calculation. I started with a generic search to first understand what it is and how to arrive at it.

Maybe my search fu is not very good or whatever, but my searches always led to getting IP ranges / calculating the subnet mask from the CIDR, and not how to arrive at the CIDR.

There were a few calculators which gave what I wanted, but did not explain how to arrive at it. But playing around with them and seeing the outputs, I could sense a sort of pattern emerging, and thought I had seen something like this before. Then I remembered that there was a third party package which some other application at work was using. I searched for it and, lo and behold, Python 3 has that functionality built in. Not sure if Python 2 had it or not.

Python 3 has this module, ipaddress (in the standard library since Python 3.3), which can be used for IP address related things. So for my use case all I had to do was call the appropriate function, give it the ipaddress/subnet information, and get the output. I had to set the strict argument to False so that it would take a host IP address as part of the input instead of a proper network address.

>>> import ipaddress
>>> network = ipaddress.IPv4Network("10.25.123.20/20", strict=False)
>>> str(network)
'10.25.112.0/20'
>>> 
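
The same network object also exposes the pieces individually, in case the prefix or mask is ever needed on its own (continuing the same session):

>>> network.network_address
IPv4Address('10.25.112.0')
>>> network.prefixlen
20
>>> network.netmask
IPv4Address('255.255.240.0')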

Office, programming

Testing hiera data

I have been looking around for ways to validate puppet hiera data files.

My requirements were:

  • check if the yaml is well formed.
  • check if all required settings are provided in the yamls
  • check if the required settings values are of expected format
  • explain how a value is resolved when the setting is present in multiple files in the hierarchy.

The basic idea that I had arrived at was to use JSON schema based validation: create schemas for the various files in the hiera hierarchy, and validate each file against its own schema.
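
As a rough sketch of that idea, assuming PyYAML and the jsonschema package are available; the schema, the setting names and the file name below are just placeholders:

import yaml
from jsonschema import validate, ValidationError

# a tiny, made-up schema for one file in the hierarchy
schema = {
    "type": "object",
    "required": ["ntp_servers", "enable_monitoring"],
    "properties": {
        "ntp_servers": {"type": "array", "items": {"type": "string"}},
        "enable_monitoring": {"type": "boolean"},
    },
}

with open("./common.yaml") as yamlfile:
    data = yaml.safe_load(yamlfile)

try:
    validate(instance=data, schema=schema)
except ValidationError as error:
    print(f"common.yaml: {error.message}")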

I already had a toy project which does that, which I made sometime last year. But I thought I would research a little and do things a little more properly.

I was curious to see whether this problem had been faced by others. Searching around the internet, I found that puppet has a command which sort of does this. It's not exactly a validation of the data, but rather what hiera would return when a setting is requested.

puppet lookup command

That still wouldn't help in our case. We can see what value would be applied, but whether the engineer has filled it in the correct format, whether it is a boolean or a string, those cannot be validated. Also, we can look up only one setting at a time. But it does help in seeing how puppet arrives at the value, with the helpful "explain" option.
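
For reference, the lookup with its explain option looks something like this (the key name here is just a made-up example):

$ puppet lookup ntp::servers --explain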

On further searching I found that the exact same idea had already been thought of, and the post below also refers to tools which would help achieve it.

https://logicminds.github.io/blog/2016-01-16-testing-hiera-data/

The post describes using puppet-retrospec and kwalify to achieve that, but those didn't work out; it seems they are for older versions of puppet.

For ensuring the yaml is right, I thought there should be something like pylint or jslint, and I was right: a simple search pointed me to something called yamllint, which also has a Python interface with which it can be included in our scripts. A simple example of it would be like this:

from yamllint import linter
from yamllint.config import YamlLintConfig

# use yamllint's built-in "default" rule set
conf = YamlLintConfig('extends: default')

filepath = "./test.yaml"
with open(filepath) as yamlfile:
    yaml = yamlfile.read()

# run() yields a LintProblem for every issue found in the document
problems = linter.run(yaml, conf, filepath)
for problem in problems:
    print(f"{problem.line}:{problem.column} \t {problem.level} \t {problem.desc} ({problem.rule})")

which produces

$ python testyamllint.py 
1:1      warning         missing document start "---" (document-start)
3:1      error   duplication of key "a" in mapping (key-duplicates)
3:7      error   no new line character at the end of file (new-line-at-end-of-file)

for a sample yaml

a: "b"
b: "c"
a: "d"

Let's see how it goes.

programming

Sorting Coursera Course List by duration

With the lockdown going on, I was too bored. Work from home, playing with the kids and playing Injustice have sort of become a routine. The first weeks of lockdown I spent the spare time watching Arrow on Airtel Xstream, which suddenly disappeared from the platform. Not knowing what else to do, and with Coursera announcing free courses, I saw the Cloud Computing 101 course and thought I would refresh my memory on the subject. It was 3 weeks long; with byte-sized videos and mostly theory I finished it in 1.5 to 2 weeks. So I thought I would look into some other short duration courses.

The Coursera page has a categories list, and once you select a category from the filters you can select a duration and get a list of courses within the selected duration. But you cannot sort by duration.

I first looked to see if Coursera offered any APIs. They do, but you have to join their affiliate program. I just wanted to sort by duration, so that was not going to be helpful.

So I decided to see what happens behind the scenes when I set a filter and then use that to get the data and sort it separately.

So what happens behind the scenes

When checking with dev tools, I observed the following. When we select a filter and click on Apply Filters, a POST request is sent to this URL:

https://www.coursera.org/graphqlBatch?opname=catalogResultQuery

That doesn't seem right, right? The filter information is not there. That information is being sent as POST data. Checking the payload, there was this huge JSON blob being sent. It had all the information: a huge query which the Coursera backend uses to fetch the required data, I guess. But the thing we are interested in is the variables object.

{
  "limit":30,
  "facets":[
    "skillNameMultiTag",
    "jobTitleMultiTag",
    "difficultyLevelTag",
    "languages",
    "productDurationEnum:1-4 Weeks",
    "entityTypeTag",
    "partnerMultiTag",
    "categoryMultiTag:information-technology",
    "subcategoryMultiTag"
  ],
  "sortField":"",
  "start":"0",
  "skip":false
}

The fields that we have to manipulate are limit, start and the value "categoryMultiTag:information-technology" inside the facets list. limit and start are used to control pagination and the number of courses returned in a single request, and categoryMultiTag is the course category that we are interested in. Making a request to this URL, we get a JSON response which has the course information and the pagination details.

The pagination details are found under ["data"]["CatalogResultsV2Resource"]["browseV2"]["paging"], which has the total number of courses and the next request's start value.

The courses list is under ["data"]["CatalogResultsV2Resource"]["browseV2"]["elements"][0]["courses"]["elements"].

Now that we know where to send the request, what to send, and what to expect in the response, I wrote a script to recursively get all courses of a category and duration and dump them to CSV.
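
Before the full script, here is a trimmed-down sketch of the pagination loop. The big GraphQL query string and the exact payload wrapping are left out (build_payload is a hypothetical helper standing in for them), and the "total" / "next" key names under paging are assumptions based on the paths above:

import requests

URL = "https://www.coursera.org/graphqlBatch?opname=catalogResultQuery"

def fetch_page(start, limit=30):
    # only start / limit change between requests; the facets pin the category and duration
    variables = {
        "limit": limit,
        "facets": [
            "productDurationEnum:1-4 Weeks",
            "categoryMultiTag:information-technology",
        ],
        "sortField": "",
        "start": str(start),
        "skip": False,
    }
    payload = build_payload(variables)  # hypothetical: wraps the captured query + variables
    return requests.post(URL, json=payload).json()

courses = []
start = 0
while True:
    data = fetch_page(start)
    # the batch endpoint may return a list; adjust the indexing below if so
    browse = data["data"]["CatalogResultsV2Resource"]["browseV2"]
    courses.extend(browse["elements"][0]["courses"]["elements"])
    paging = browse["paging"]
    if start + 30 >= paging["total"]:  # assumed key for the total course count
        break
    start = int(paging["next"])        # assumed key for the next start value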

You can find the entire script here: https://gist.github.com/anabarasan/e6b5b6842e97592ec1eaffbc30ce703e

Office

GlassFish v2.1.1 & Custom Error Messages

There are times when you might want to enable custom, exotic, jazzy error messages instead of the boring 404 or 500 messages. It can be due to various reasons: security, to avoid scarring the users, etc.

You can enable custom error messages either server wide (all applications in a server) or only to a specific application in the server.

To enable it across all applications :

This is specific to GlassFish v2.1.1; for other versions it may vary.

Method 1:  editing the domain.xml manually.

  1. goto glassfish\domains\{domain directory}\config\
  2. copy the custom error message html files here.
  3. Open the domain.xml file located in the same folder.
  4. Find the tag "virtual-server", which has an attribute "id" with the value "server".
  5. Add a sub element to that virtual-server element like this.

<property name="send-error_1" value="code=404 path=404.html"/>

The name attribute value has to be of the format send-error_n, where n is a number; for different errors, increase the value of n and change the value attribute. The value has two parts, code and path. code is the three digit HTTP response code, e.g. 400, 404, 500, 502. path is the path to the custom error html page; this path is relative to the location of this domain.xml.

Method 2:  Via Admin Console.

  1. login to Administration Console.
  2. In the tasks menu on the left, goto Configuration => HTTP Service => Virtual Servers => server
  3. In the right pane, scroll down to the bottom, and click on the Add Property button.
  4. under name enter "send-error_1", and under value enter "code=404 path=404.html"
  5. similarly, add a property for every error and enter the values; for an explanation of the name and value formats, refer to Method 1.

To enable it for a specific deployed application:

Custom error messages can be enabled for a particular deployed application by modifying the deployment descriptor. In the web.xml file, under the web-app element, add the following:

<error-page>
          <error-code>404</error-code>
          <location>/404.html</location>
</error-page>

You can add multiple error-page elements, one for each error. The location is the path to the custom error page; the path is relative to the docroot of the application.

Reference:
http://docs.oracle.com/cd/E13222_01/wls/docs81/webapp/web_xml.html
http://docs.oracle.com/cd/E19879-01/821-0183/abhaq/index.html
http://docs.oracle.com/cd/E19879-01/821-0183/abhfg/index.html

Office

Belling the Cat (or) How I made OrangeScape tango with Tomcat

A month or two ago, a few colleagues of mine wanted to run OrangeScape in Tomcat for their local testing, just because they didn't like JBoss. I wanted them to trawl through the internet and do it by themselves, so I could chance upon anyone worth pulling into my team when the time comes. But they got busy with other things, forgot about running OrangeScape in Tomcat, and went with JBoss. Now I am fed up with waiting. So here I am, documenting my findings on how to make OrangeScape tango with Tomcat.

I have tested with Tomcat 6.0.35 and 7.0.23, with JDK 1.6.29. Running an OrangeScape application in Tomcat is really easier than I had expected. Really.

Getting the Application War files

You can download the war of your application from the studio. Download the supporting war files (os-commons, runtime, static) from the corresponding build release notes in the community, the current release number being 98.

Making changes to make it work in Tomcat

You will have to make changes to 2 files for the OrangeScape Application to work in Tomcat.

    • app.war\WEB-INF\applicationContext.xml
      • search and find the line
        <prop key="hibernate.connection.datastore">jdbc/{application-id}</prop>
      • change it to look as below
        <prop key="hibernate.connection.datastore">java:comp/env/jdbc/{application-id}</prop>
    • app.war\META-INF\context.xml
      • this file does not exist in the war and you will have to create it. the content should be as follows.
        <?xml version="1.0" encoding="UTF-8" ?>
        <Context>
                <Resource
                        name="jdbc/{application-id}"
                        auth="Container" type="javax.sql.DataSource"
                        username="{db-user-id}" password="{db-user-password}"
                        driverClassName="com.mysql.jdbc.Driver"
                        url="jdbc:mysql://localhost:3306/{db-name}"
                        maxActive="20"
                        maxIdle="5"
                />
        </Context>

That's it. Now just deploy it to Tomcat and access the URL http://localhost:8080/{application-id}/1/signin to log in and start using your application.

Note:

  • The first time you start the server after making the deployment (or immediately after deployment, if you use the manager to deploy), you will see an HTTP 500 error thrown on the console. That is because the application is not yet set up, but the SLA is checking whether there are any jobs to execute.
  • As far as I have checked, things work. Webservices & reports were throwing errors, but it could be due to some mistake of mine, since I was in a hurry to go to sleep. UPDATE: Everything works fine; as I had noted, it was because of one of my mistakes.

Office

User Management via SQL for OrangeScape Applications

User Management is one of the new functionalities available in OrangeScape applications created after the late September release. For those applications which were created before that release, the required fields would have been created during migration. But the developers will have to make the additional changes, like the process and forms, by going through the User Management available in the newer applications.

This post is not about explaining the User Management part or about what changes have to be made for User Management to work in older applications. This post is about creating users and managing user roles via the backend, directly in the database.

First, a small understanding of the models behind User Management. User Management is made of the following models: User, UserChanges, RoleSubscription, AppRole.


I know the image is not so good; the diamonds are slightly wrong. Hope at least you will be able to understand the one and many sides. Ok.

So there is the AppRole model, which has all the roles you have defined in the application. You don't have to insert anything there; the application reads your process and creates the required records for the roles defined there. Next there is the User model, where all the user records are stored. Then there is the RoleSubscription model, where you define the roles to which the user is assigned / subscribed. The UserChanges model has the change history of the user's details.

When you create a user, a record is created in the UserChanges model, and all required details are entered. Once you save the record, the actual User record is created. Once this is done, you can assign roles to the User by inserting records into the RoleSubscription model. To know how to do these things from the application, please refer to the corresponding section at http://learn.orangescape.com.

If you are wondering why the initial user details are being captured in UserChanges: once a record is created in the User model for a user, the whole process of user creation is complete. When you do a create new in an OrangeScape application, that record is immediately saved in the backend and only then given to you to edit. Which means, when doing a create new in the User model, a new, empty User record would be created, which is BAD. If you want to know how the data entered in UserChanges syncs up with User, and how all changes done for the User in UserChanges are listed from User, read the post here.

Ok, so now we know a little of how User Management works (any questions / doubts, for the greater good of other people, please ask them in the community forums). There is a beautiful & easy UI provided in the application; make sure you check it out. But, as everyone knows, it is a pain to enter 60 or 70 user records. Well, then you have the CSV import option available, which even validates whether your data is right. But some, for some reason, need to insert data into the database directly (in the case of integration, for example). Let's proceed with inserting data directly into the backend.

NOTE:

This is for those versions where you use a database for the backend: On Premise or GAE SQL, not GAE NoSQL (BigTable).

Download this SQL. It has the sample queries, the minimum required columns, and the data required for those columns. Use those queries as templates and insert the data as required into the backend. Since updates would never make sense with the cache in place, modifications are to be done via the application.

Office, programming, What the...

Birthday Alerts app [version 2]

When I wrote last time about how I made the birthday alerts app, I wrote how, because of a property of the SLA, the data had to be entered at the exact time of day at which you wanted to receive the reminder on the desired day. (I know I am being confusing because I am trying to say it all in one line; read the previous post and you will understand.) Well, now you don't have to, and the logic behind the app has been made much simpler. The process flow part is just one activity, apart from start and end, unlike the previous time, when it was 7 or 8 activities. It is still 7 fields, and all the logic has been moved to actions.

The activity Reminders has an SLA set on it; the time duration is obtained from the field RemindIn. The two actions in it are: one to calculate when to remind, and the other to send the reminder and reset it for the next round. The changed data model is as follows.

The calculations done to get the RemindIn time duration are as follows (a plain Python rendering of the same arithmetic follows the list).

  • DayNextYear = (DATE(YEAR(NOW())+1,MONTH(Birthday),DAY(Birthday)))
  • DayThisYear = (DATE(YEAR(NOW()),MONTH(Birthday),DAY(Birthday)))
  • RemindIn = IF(Birthday>NOW(),(Birthday-TODAY()-1)*24+HOUR((TODAY()+1)-NOW()),IF(DayThisYear>NOW(),(DayThisYear-TODAY()-1)*24+HOUR((TODAY()+1)-NOW()),(DayNextYear-TODAY()-1)*24+HOUR((TODAY()+1)-NOW())))
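
The formula is dense, so here is the same arithmetic written out as a small Python sketch. This is purely illustrative (OrangeScape evaluates the spreadsheet-style formula above, not this code); it rounds down to whole hours the way the HOUR() part does, and it ignores the Feb 29 edge case:

import datetime

def remind_in_hours(birthday: datetime.date, now: datetime.datetime) -> int:
    """Whole hours from `now` until the start of the day of the next birthday."""
    today = now.date()
    if birthday > today:
        # the stored date itself is still in the future (first reminder)
        target = birthday
    else:
        # otherwise use this year's occurrence if it is still ahead, else next year's
        this_year = birthday.replace(year=today.year)
        target = this_year if this_year > today else this_year.replace(year=today.year + 1)
    full_days = (target - today).days - 1                # whole days starting from tomorrow
    midnight = datetime.datetime.combine(today + datetime.timedelta(days=1), datetime.time())
    hours_left_today = (midnight - now).seconds // 3600  # hours remaining until midnight
    return full_days * 24 + hours_left_today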

Well, that's it. If you think anything is amiss, or some other thing should be there, let me know.

About this app: this is a simple app which takes in the date on which you have to be reminded and sends you a mail about what you want to be reminded of. It will automatically remind you every year (since, you see, birthdays / anniversaries come only yearly).

Office

The Cancel problem!

Anyone who has been using OrangeScape will know about the auto save feature (in runtime). (I use the words "auto save" just because I could not think of any other words which explain what I am trying to say.)

Once you type in a field in your application and move to the next field, the first field's value will be automatically saved. (Didn't they say they use AJAX too?) But sometimes this is not required. So, for those who do not require that data be saved to the backend immediately, OrangeScape has three options which you can set to control when the data is sent back to the server to be saved to the backend. In Form Design, in the Ribbon, under the category Cell, you can see three options: Dont Submit, Submit, Submit on Dependency. Lemme briefly say what these three options are.

  • Dont Submit: as the name says, the field data will not be sent to the server as and when the field data changes. The whole form will be submitted when you submit the form.
  • Submit: the field data will be sent to the server on every change.
  • Submit on Dependency: the data will be sent only if there is a formula / rule which depends on the value of the field currently being changed. All data entered up to that point will be sent to the server. For example, if you have 3 fields (name, qty and amount), and amount is calculated by a formula which involves qty, then when you change qty, the name field value will also be sent to the server.

The default is Submit on Dependency.

Ok! After all these details, you must be thinking: then what is this Cancel button problem that I am talking about? If you aren't, then after reading the scenario, you will.

Disclaimer: The following process / scenario / persons in this scenario are fictional. Any resemblance to an existing process / scenario / person is purely coincidental. A few words of technical mumbo-jumbo can be seen. I apologize.

Warning : This is a really long & boring blog post.

Let's assume that you are updating the profile details which you created when you first logged in to this awesome application created with OrangeScape. Let's see what details are there in the form. There is the usual: first name, last name, date of birth, address, country, state, city. This person, who is editing, has moved to a city in another state.

Let's make another assumption: that all fields have been set as Dont Submit. So what will happen? The user will change the address, then the state in the State drop down, then will try to change the city in the City drop down. But wait! The city drop down is still showing cities belonging to the previously chosen state. Why? Because the application does not know what the cities in the chosen state are. It has to communicate with the server, find out what the cities in that state are, and then show them to you. But it has been set as Dont Submit, which means communication with the server has been barred till you submit the page.

Now, this leaves us with no choice other than to change the setting to Submit on Dependency on the three drop downs (Country, State, City). Now, after changing the address, the person chooses the state, and a request will be sent to the server to fetch the list of cities belonging to the chosen state. At this point, in the request sent to the server, all the data which has been modified up to this point will be submitted. Which means the address has been saved.

Well, what's the problem, you ask?! The person has a change of mind, does not want to carry on with the update, and wants to cancel and go back to how it was previously. Unfortunately, it can't be done. The data has been changed and saved. There is no way of getting it back. Ah! Now you get the problem?!!

So how do we solve this problem?!! During a discussion at work, a person I work with gave this idea, which he had used in another application of his. Which was good. And this would also help those who want to track every change that their users make during every submit. Yeah! You are thinking right.

Data versioning

Data versioning: essentially, what you will be doing is, whenever you open a record to edit, you will actually be opening a copy of the data to edit; if everything is ok and you want to submit the changes, then they will be applied, else you can discard the data. (Or you can store it and see how the user changed the data from what to what, etc. But be responsible with what you do.)

Whatever you wanna call this: this is not the only way. Maybe there are (will be) other alternatives. If you are going to use this, welcome aboard.

Ok!  let’s say we have one model, for which we want to implement this.  Let’s see how to do it.

Now, keep only those fields which are absolutely necessary in this model. Leave out fields for validations, flags for permissions and all that unnecessary stuff. Create one more model, where you can put all that unnecessary stuff plus all your fields which are absolutely necessary. Let's call the model with the absolutely necessary fields Transactions, and the other model, with all the unnecessary stuff plus the absolutely necessary fields, History.

We need 2 more fields in the Transactions model as connections to the History model, and 1 field in the History model as a connection to the Transactions model. The following logic will be really confusing, because I was getting confused with this logic when listening, when explaining, and also when doing. So most probably I will be writing this in the same confused way. Be prepared to be confused.

In the Transactions model I am going to create three actions.

  • Create History
    • A NEW Command which will create a copy of the current data in History model for editing
  • Update Transactions
    • An UPDATE Command, which sets the value of Draft into Current.
  • Workflow
    • An UPDATE Command, which releases the History record from draft state to commit state.
    • A Submit Command, to proceed in the workflow.

Now let's move to the History model. Here I am going to create just one action:

  • Update Entries
    • Two PARENTCALL Commands, the first one to call Update Transactions, and the second one to call the Workflow action.

In Transactions, in all fields I have written DGET() to fetch data from the History model, based on the "Current" reference field. As for the forms, I had designed the forms in the History model and used that form as a Many To One form via the Draft reference field, and for the submit I used the Update Entries action. That's it. O! I totally forgot: the Create History action, which fires every time a record comes to the activity, I have set using Connect -> Action in the Process Design.

So how does it work?!#@

Once you create a new record or submit an existing record, Create History will be called, and a new History record will be created with the existing set of values. For example, let's consider a simple model, Transaction.

So, when a new record is created or a record is submitted, the Create History action is called, and a History record is created via the NEW command; a reference to the newly created record is set in the Draft reference cell in the Transactions model, and a reference to the Transactions record is set in the History reference cell in the History model. Now, based on the Draft reference cell, the newly created History record will show in the form as a Many to One record. The user will make changes / enter data in this form. So, all data is being entered into the History record. Till the user clicks the submit button, all data will reside in the History model. If the user decides to cancel, then you can either discard the History record or store it as editing history.

Ok! So when the user submits, the Update Entries action (via its PARENTCALL commands) sets the value of the Current reference cell to the value in the Draft reference cell. Now all the DGET() functions in the Transactions model will fire based on the Current reference cell value and sync the data which was entered / changed in the submitted version of the History record. Then we clear the Draft reference cell to blank, so that the Current referenced and Draft referenced records are not the same records.
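
If it helps, here is the same draft / current idea stripped of all the OrangeScape specifics, as a tiny Python sketch. The class and field names are only for illustration; this is the shape of the idea, not how OrangeScape stores anything:

class History:
    """An editable copy of a transaction's data (the 'draft')."""
    def __init__(self, data):
        self.data = dict(data)

class Transaction:
    def __init__(self, data):
        self.data = dict(data)   # the committed values
        self.draft = None        # "Draft" reference cell
        self.current = None      # "Current" reference cell

    def create_history(self):
        # the NEW command: open a copy of the current values for editing
        self.draft = History(self.data)
        return self.draft

    def submit(self):
        # the UPDATE commands: point Current at the draft, sync the values, clear Draft
        self.current = self.draft
        self.data.update(self.current.data)
        self.draft = None

    def cancel(self):
        # discard (or archive) the draft; the committed values are untouched
        self.draft = None

# the user edits the draft, then either submits or cancels
txn = Transaction({"address": "old address", "state": "old state"})
draft = txn.create_history()
draft.data["address"] = "new address"
txn.cancel()     # txn.data still holds the old address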

So that's it. I have done it and tested it, and it is working. But don't take my word for it. Please try it out and understand what is really happening. This may or may not get complex based on your scenarios. I know, I know, it is already confusing, and I am writing this whole thing in the middle of the night, half asleep.

If you still have questions, please post them in the comments. Or, if you think you have an even better idea, lemme know; we can make things work better.

Office, programming, What the...

Delete records along with Worktop count change

Well, well, well. Sometimes, in your application, you would have to delete records because you created one request extra, or because instead of raising a request in this category you raised it under another category.

If the application is developed in OrangeScape and you delete a record, the count in the worktop will not go down; it will continue to show the count which was there before the record was deleted. So how do you delete the records and also have a proper count in the worktop?!!!

OrangeScape, it seems, is designed with the assumption that no transaction record will be deleted, but will instead be archived. Yeah, but in some places that is not a requirement; you are free to delete the records. So how to do that? Let's see how to do it.

Warning: This involves using System Models. If not done properly, it could lead to problems, and I am not to be held responsible.

Assumptions: I hope you know how to delete a record from the inbox, because that is the method I am gonna use, and I am not going into the details. Also, I hope you guys understand the relationships which exist between your models and the system models.

Ok. So, if you are still reading, let's do it. Before that, a little explanation. We will be dealing with one system model: Process Instance. But because of the procedure we are going to do, data in two system models will be affected: Process Instance and Process Audit. Process Instance and Process Audit together hold the who-did-what information in the application. OK, let's get down to it and get our hands dirty, shall we?

Open the Process Instance model, create a new action, and name it something, let's say "Eraser". In that action, add the Delete command. Save the model and close it.

Now, open the model where you want the delete functionality. Add a new action, or you can also use the existing action which you have configured to delete the record. The first command that you are going to add in this action is CallParentAction. In the CallParentAction properties:

  • for Parent – Model you will choose ProcessInstance(ProcessInstance).
  • for ParentAction-Name you will choose the action which we created now, in this case “Eraser”

The next command that you will add is the Delete command. That's it. The action design should look like this.

Now add this action to the inbox. Now, whenever you execute this action, the worktop count will also be recalculated to reflect the changes.

Woah!  Howzzzzat?!

Update: as @Vaithi_G puts it, this is good for models with a straight process flow, but if there are branches you will have to include at least two more CallParentActions for each OR or AND branch, which will make the configuration tedious for every branching in the process. As he suggested, you can use the RCall command once instead of the CallParentAction command. If you use RCall, the parameter configuration is as follows:

  • Choose ProcessInstance in Model-Name list.
  • for CallAction-Name enter the Action Name, again in this case “Eraser”
  • for Search-Criteria enter =Criteria(ProcessInstance.InstanceId=SheetId)

Another update: @VivekMadurai says that in the next version, the first method (using CallParentAction) itself will delete all Process Instances even if there are multiple branches. So, for the sake of performance, don't use the RCall command method after 2 weeks.

Office, programming, What the...

Birthday Alerts App

A few months back, during a discussion in the office, @JohnPrawyn was talking about some difficulty in doing a birthday alerts app using OrangeScape (OrangeScapians' birthday alerts are his responsibility). Then I just forgot about it.

2 days back, I fell asleep while reading, and I had a dream about the implementation of the birthday app. I woke up and quickly noted down whatever I remembered. It was simple, but there were some small problems… So here is the process flow which I had dreamed.

[Image: BirthDayReminderProcessFlowInOrangeScape]

There is no way to schedule things as of now (maybe some feature is in the pipeline!), so I decided to use the SLA feature. Checks are done to find birthdays every month / week / day (based on the nearness of the alert day).

The above process flow is flawless on paper; now I have set some data to check it.

So, are you still wondering what the small problem that I talked about was? Yeah, there is a problem there. A feature of the SLA is that the time duration check will start once the request reaches the particular activity. Which means, for example, in the process flow above, if the request reaches the activity Initialize at, say, 4:30 in the evening and the SLA hours are set as 24 hours, then the activity will next run tomorrow at 4:30 in the evening. Which means, to get the reminder at the beginning of the day, I should sit at the beginning of the day and set the reminder. Will need to find a workaround for that. (I have one, but let me first finish testing this; if everything works fine, then I will implement it.)

O! And there are only 6 fields (+ system fields) in the model.

[Image: BirthDayReminderModel]

Well anyway, I had dreamed of solutions before, but nothing of this sort had happened. It would usually be like conversing with someone and coming up with something, but not the complete answer to a question.