As far as Recruitment (Online or Offline) is concerned, there has been a lot of technological progress in the past 15 years.
But we are nowhere near building an EXPERT SYSTEM which takes over from human recruiters those tasks that can be done better / faster by SOFTWARE, leaving humans free to do what software cannot.
What are these tasks?
In August 2002, I read a book:
EXPERT SYSTEMS: Principles and Case Studies
(Edited by: Richard Forsyth)
My handwritten remarks in the margins of this book, reproduced below, might provide an insight into the components of such a system.
On my 84th birthday, I dedicate these remarks to my co-professionals in the Recruitment Industry, from whom I continue to learn.
Hemen Parekh
27 June 2017
Mumbai
Page 2
·
I read his "The Human Use of Human Beings" around 1957 / 58. Wiener was called the "Father of Cybernetics"; in fact, he coined the word "Cybernetics".
Page 6
·
Is this like our saying: "IF such-and-such keywords appear in a resume, THEN it may belong to such-and-such INDUSTRY or FUNCTION"?
Page 7
·
I believe the "KNOWLEDGE" contained in our 65,000 resumes is good enough to develop an Expert System (ARDIS / ARGIS). We started work on ARDIS / ARGIS in 1996! But it was taken up seriously only 3 months back!
Page 8
·
Keywords are nothing but "descriptions" of resumes.
Page 11
·
I believe the ISYS manual speaks of a "Context Tree"; so does the Oracle Context Cartridge (Themes).
Page 15
·
"Hypothesized Outcome" > In our case, the Hypothesized Outcome could be a resume:
# getting shortlisted by 3P
# getting shortlisted by the Client, OR
# a candidate getting "appointed" (after interview)
·
"Presence of the Evidence" > In our case, the "presence of the evidence" could be the presence of certain "Keywords" in a given resume (the horse), OR a certain "Edu Quali", OR a certain "Age", OR a certain "Exp (years)", OR a certain "Current Employer", etc.
Page 16
·
In our case, these several "pieces of evidence" could be:
# Keywords
# Age
# Exp
# Edu Quali
# Current Industry Background
# Current Function Background
# Current Designation Level
# Current Salary
# Current Employer
etc.
We could "establish" the ODDS (for each piece of evidence) and then apply them SEQUENTIALLY to figure out the "Probability / Odds" of that particular resume getting "Shortlisted / Selected".
We have to examine (statistically) the resumes of ALL candidates shortlisted during the last 13 years to calculate the ODDS.
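The sequential application of odds described in this note is essentially the odds form of Bayes' rule: start with the prior odds of a resume getting shortlisted, then multiply by one likelihood ratio per piece of evidence. A minimal sketch in Python; every number and evidence label below is invented for illustration, not taken from real data:

```python
# Sequential (naive-Bayes style) odds updating: multiply the prior odds of
# "shortlisted" by one likelihood ratio per piece of evidence.
# All numbers below are invented for illustration, not real statistics.

PRIOR_ODDS = 0.10  # e.g. historically roughly 1 in 11 resumes got shortlisted

# Likelihood ratio = P(evidence | shortlisted) / P(evidence | not shortlisted)
LIKELIHOOD_RATIOS = {
    "keyword:derivative": 4.0,   # strong positive evidence
    "edu:MBA": 1.5,
    "age:under_40": 1.2,
    "exp:under_2_years": 0.3,    # evidence against shortlisting
}

def posterior_probability(evidence):
    """Apply each piece of evidence sequentially; return P(shortlisted)."""
    odds = PRIOR_ODDS
    for item in evidence:
        odds *= LIKELIHOOD_RATIOS.get(item, 1.0)  # unknown evidence is neutral
    return odds / (1.0 + odds)  # convert odds back to a probability

p = posterior_probability(["keyword:derivative", "edu:MBA", "age:under_40"])
```

Multiplying per-evidence ratios like this assumes the pieces of evidence are independent of one another (the "naive" assumption), which the note on "Age" and "Exp (years)" being dependent already flags as only approximately true.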
Page 18
·
"Automating" the process of Knowledge Acquisition? We could do this (automating) by getting / inducing jobseekers to select / fill in the keywords themselves, online, in the web form.
Page 19
·
I suppose this has become a reality during the 13 years since the writing of this book.
·
The "Decision Support" that our consultants need is:
"From amongst the thousands of resumes in our data bank, which 'few' should be sent to the Client? Can software AUTOMATICALLY locate those few which have an 'Excellent Probability' of getting shortlisted / selected?"
Our consultants today spend a lot of time doing just this manually, which we need to automate.
Page 20
·
These ( few ) resumes are GOOD for this VACANCY
Page 22
·
According to me, this "notation" is:
All human thoughts / speech and actions are directed towards either increasing the happiness (of that person) OR decreasing the pain, by choosing from amongst the available thoughts / spoken words / actions.
This "notation" describes ALL aspects of the human race.
This ability to choose the most appropriate option (at that point of time) makes a human being "intelligent".
Page 23
·
There are millions of "words" in the English language, used by authors in books, by poets in songs and by lawyers in documents, but the words of interest to us are those used by jobseekers in resumes and by recruiters in job advts.
This is our area of expertise.
·
Program = Control + Data (probabilities of the occurrence of 10,000 keywords amongst "past successful" candidates)
·
Problem Description > See my remarks at the bottom of Page 19 for OUR problem description.
Page 25
·
RESUMIX (Resume Management Software) claims to contain 100,000 "rules".
Page 26
·
Our expertise lies in the "matchmaking" of jobseekers with the "vacancies" of recruiters.
·
Our business does fall in such a "Specialist" category.
·
Persons who have spent 15 years reading resumes, deciding their "suitability" and interviewing candidates.
Page 27
·
Agree! We do not expect an "Expert System" to conduct interviews! Our consultants do spend 2 / 3 hours daily reading / shortlisting resumes.
·
We want a "Decision Support System" to assist our consultants, so that they can spend more time on the "interview" type of "assessment".
·
If, during the last 13 years, we have placed 500 executives, then we / the client must have "shortlisted" 5,000 resumes. These are enough "Test Cases".
Page 28
·
In the last 13 years, this has grown maybe 50 times! So that cannot be a limitation.
·
I had perceived this as far back as 1996.
·
Now (in 2002), expert systems have become "essential" to the survival of all organizations. We can ignore them at our peril!
Page 29
·
We can become VICTORS or VICTIMS: the choice is ours.
·
I am sure that by 2002 we must have many "MATURE" expert system "Kernels" / "Shells" commercially available in the market.
·
We don't need to, but we could talk to professors of AI / Expert Systems at IIT (Powai), TIFR or NCST for guidance.
Page 30
·
Ask NCST (Juhu Scheme) if they can train us.
·
Maybe we could send an email to Mr FORSYTH himself to seek his guidance. We will need to explicitly state > our problem > the solution which we seek from the Expert System, and ask him which commercially available "Shell" he recommends {Email: Richard.Forsyth@uwe.ac.uk}.
Page 32
·
How many does this Directory list in 2002?
·
Google still shows CRI – 1986 as the latest!
·
But "Expert Systems" in Google returned 299,000 links!
·
I took a course in X-Ray Crystallography at KU in 1958.
Page 33
·
When developed, our system would fall under this category
·
Most certainly, we should integrate the two
Page 35
·
The resumes shortlisted by our proposed "Expert System" (resumes having the highest probability of getting shortlisted) must be manually "examined" and assigned "weightages" by our consultants, and these "weightages" fed back into the system.
Page 37
·
I believe our system will be a simple "rule-based" one, although there may be a lot of "processing" involved in the "sequential" computation of probabilities for "keywords" related to:
# Industry / Function / Designation Level / Age / Exp / Edu Quali / Attitudes / Attributes / Skills / Knowledge / Salary / Current Employer / Current posting location / family, etc.
Page 39
·
In my notes on ARDIS / ARGIS, see the notes on "Logic for……". Here I have listed the underlying rules.
Page 40
·
The expert knowledge (and consequently the RULES) contained in RESUMIX has relevance to USA jobseekers and their "style" of resume preparation.
These (rules) may not apply in the Indian context.
Page 41
·
We are trying to establish the "relationship" between:
# the probability of the occurrence of a given "keyword" in a given resume,
WITH
# the probability of such a resume getting "shortlisted".
·
REASONING WITH UNCERTAIN INFORMATION
(Author's Note: Many expert systems unavoidably operate in task domains where the available information is inherently imprecise (rules derived from experts are inexact, data values are unreliable, etc.))
My Comment:
What if we have lost the resumes of the 5,000 candidates who got shortlisted during the last 13 years?
Only "Age" and "Exp (years)" are dependent in our case.
Page 42
·
Exp (years) can never be > Age (years).
·
So we will need to prepare a comprehensive list of "inconsistencies" with respect to a resume, e.g. as shown above.
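Such a list of inconsistencies can be expressed as simple validation rules run over each resume before any matching. A sketch, with hypothetical field names and an assumed minimum working age of 15:

```python
def find_inconsistencies(resume):
    """Return a list of rule violations found in one resume record."""
    problems = []
    age = resume.get("age_years")
    exp = resume.get("exp_years")
    if age is not None and exp is not None:
        if exp > age:
            problems.append("experience exceeds age")
        elif age - exp < 15:  # assumes careers start at age 15 or later
            problems.append("implausibly early start of career")
    return problems
```

Each new inconsistency from the comprehensive list becomes one more `if` clause, so the checker grows with the list.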
Page 43
·
We should ask both (the Expert System and the human experts) to independently shortlist resumes and compare.
·
We have to experiment with building an expert system which would "test / validate" the assumption:
# If certain (which?) "keywords" or "search parameters" are found in a resume, it has a higher probability of getting shortlisted / selected.
Page 44
·
E.g.: the system shortlisting a "Sales" executive against a "Production" vacancy!
·
What / which "cause" could have produced what / which "effect / result"?
Page 45
·
In our case, the expert system should free our consultants to do the more "intelligent" work of assessing candidates through personal interviewing.
Page 47
·
E.g.:
# Entering email resumes into the "structured" database of Module 1
# Reconstituting a resume (converted bio-data) through ARGIS, automatically
For these "tasks", we should not need human beings at all!
Read "What Will Be" (Author: Michael Dertouzos / MIT Lab / 1997)
Page 48
·
Even when our own Expert System "shortlists" the resumes (based on a perceived high probability of appointment), our consultants would still need to go through these resumes before sending them to clients. They would need to "interpret".
·
Read all of my notes written over the last 13 years.
Page 50
·
Our future / new consultants need to be taken thru OES, step by step, thru the entire process, thru SIMULATION, i.e. a fake Search Assignment.
·
Our "TASK AREA" is quite obvious, but may not be simple, viz: we must find the "right" candidates for our clients in the shortest possible time.
·
In the 13 years since this book was written, "Mobile Computing" has made enormous strides. Also, the internet arrived in a big way in 1995.
By March 2004, I envisage our consultants carrying their laptops, or even smaller mobile computers, and searching our MAROL database for suitable candidates (of course, using the Expert System) while sitting across the client's table.
Page 51
·
"To increase Expert productivity" (i.e. our consultants' productivity) and "To augment Expert Capability" (i.e. to automate as many business processes as possible) are our objectives.
Page 58
·
Resumes are "data", but when arranged as a "shortlist" they become "information", because a "shortlist" is always in relation to our "Search Assignment"!
It is that search assignment which lends "meaning" to a set of resumes.
Page 59
·
Are "resumes" knowledge about "people" and their "achievements"?
Page 60
·
But is a human part and parcel of nature? Humans did not create nature, but did nature create humans? Our VEDAS say that the entire UNIVERSE is contained in an ATOM. Maybe they meant that an entire UNIVERSE can arise from an atom.
·
Are 204 "Industry Names" and 110 "Function Names" granular enough? Can we differentiate well?
Page 61
·
Read "The Aims of Education" by A. N. Whitehead.
·
Inference is the process of drawing / reaching a "conclusion" based on knowledge.
Page 62
·
Calculating the "probabilities of occurrence" of keywords and then comparing them with the keywords contained in the resumes of "Successful Candidates".
Page 64
·
IF a resume "R" contains keywords "a / b / c",
AND
the resumes of all past "SUCCESSFUL" candidates ALSO contain keywords "a / b / c",
THEN
the chances are that resume "R" will also be "successful".
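That IF / THEN note can be reduced to a set-overlap rule: score resume "R" by how many of the keywords shared by past successful resumes it contains. The keyword sets below are made up for illustration:

```python
# Keywords common to past "successful" resumes (illustrative data only).
successful_keywords = {"sap", "erp", "implementation", "abap"}

def success_overlap(resume_keywords):
    """Fraction of the 'successful' keywords this resume also contains."""
    if not successful_keywords:
        return 0.0
    return len(resume_keywords & successful_keywords) / len(successful_keywords)

score = success_overlap({"sap", "erp", "testing"})  # matches 2 of the 4
```

A score of 1.0 reproduces the IF / THEN rule exactly; scores between 0 and 1 give the graded "chances" the note hints at.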
·
Our expert system will be such an "Automatic Theorem Proving System", where the "inference rules" will have to be first figured out / established from large volumes of past "co-relations" between "keywords" and "successes".
·
"Successes" can be defined in a variety of ways, including: shortlisting, interviewing, appointing, etc.
Page 67
·
In our case too, we are trying to "interpret / diagnose" the "symptoms" (in our case, the keywords) contained in any given "patient" (resume) and then "predict" its chances (i.e. probabilities) of "success" (= getting cured), i.e. getting shortlisted OR getting interviewed OR getting appointed.
Page 68
·
For us, there are as many "rules" as there are "keywords" in the resumes of past "successful" candidates, with the "frequency of occurrence" of each such keyword (in, say, 7,500 successful resumes) deciding its "weightage" while applying the rule.
·
These resumes can be further sub-divided according to > Industry > Function > Designation Level etc., to reduce the population size of the keywords.
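The frequency-derived "weightage" described above can be computed directly by counting how often each keyword occurs across the successful resumes. A sketch with three made-up resumes:

```python
# Derive a per-keyword 'weightage' from its frequency of occurrence in
# past successful resumes, as the note suggests. Data is illustrative.
from collections import Counter

successful_resumes = [
    {"sales", "fmcg", "distribution"},
    {"sales", "fmcg", "branding"},
    {"sales", "exports"},
]

counts = Counter(kw for resume in successful_resumes for kw in resume)
n = len(successful_resumes)
weightage = {kw: c / n for kw, c in counts.items()}  # frequency of occurrence
```

Sub-dividing by Industry / Function, as the second note suggests, would simply mean computing one such `weightage` table per sub-population.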
Page 69
·
Our initial assumption:
> The resumes of "successful (past) candidates" always contain keywords a / b / c / d
> Process > Find all other resumes which contain a / b / c / d
> Conclusion > These should succeed too
* Our Goal > Find all resumes which have a high probability of "success"
* The system should automatically keep adding the actually "successful" candidates to the database as each search assignment gets over.
Page 70
·
In 1957, this was part of the "Operations Research" course at the University of Kansas.
·
With the huge number-crunching capacities of modern computers, computational costs are not an important consideration any more.
Page 71
·
Cut a finger and pain follows.
·
Somewhat similar to our situation of resumes and keywords.
Page 72
·
We are plotting the "frequency of occurrence" of keywords in specific past resumes to generalize our observations.
·
Knowledge Keywords / Skills Keywords / Attitude Keywords / Attribute Keywords / Actual Designations
We can construct a TABLE like this (with such column headings) from our 65,000 resumes and then try to develop an "algorithm".
·
In the above table, the last column (Job or Actual Designation) can also be substituted by Industry OR Function OR Edu Quali etc., and a new set of "algorithms" will emerge.
Page 74
·
Our resumes also "leave out" a hell of a lot of "variables"!
A resume is like a jigsaw puzzle with a lot of missing pieces!
We are basing ourselves on statistical forecasting, viz: the "frequency of occurrence" of certain keywords, and attaching (assigning) probability values.
Page 75
·
This statement must be even more true today, 13 years after it was first written.
Page 76
·
Just imagine if we can locate and deliver to our client just THAT candidate that he needs! Or, in the first place, just THOSE resumes which he is likely to value / appreciate.
·
I am sure that by now superior languages must have emerged. Superior hardware certainly has, as have the "conventional tools" of database management.
Page 77
·
Perhaps what was "specialized" hardware in 1989 must have become quite common today in 2002, and fairly cheap too.
Page 81
·
We must figure out (and write down) what "Logic / Rules" our consultants use (even subconsciously) while selecting / rejecting a resume (as being "suitable") for a client's need.
An Expert System must "mimic" a human expert.
·
We are basing ourselves (i.e. our proposed Expert System) on this "type" (see "patterns" in the book "Digital Biology").
Page 87
·
Illness = Industry or Function
·
Symptoms = Keywords
·
The probability that this keyword (symptom) will be observed / found, given that the resume (patient) belongs to the XYZ "Industry" (illness).
Page 88
·
Random Person = any given "incoming" email resume
·
Influenza = "Automobile" industry
·
Based on our past population? The prior probability = ("Auto" resumes) divided by (all resumes).
·
If symptoms = keywords, which symptoms (keywords) have appeared in "Auto" industry resumes, OR which keywords have NEVER appeared in "Auto" resumes, OR appeared with low frequency?
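The medical analogy in these notes is Bayes' rule: treating "belongs to the Auto industry" as the illness and a keyword as the symptom, the posterior is P(Auto | keyword) = P(keyword | Auto) x P(Auto) / P(keyword). All counts below, and the keyword "gearbox", are invented for illustration:

```python
# Bayes' rule for the 'patient / illness' analogy. Counts are illustrative.
total_resumes = 65000
auto_resumes = 5000          # resumes tagged "Auto" industry
kw_in_auto = 4000            # "Auto" resumes containing the keyword "gearbox"
kw_overall = 4500            # all resumes containing "gearbox"

p_auto = auto_resumes / total_resumes            # prior P(Auto)
p_kw_given_auto = kw_in_auto / auto_resumes      # likelihood P(kw | Auto)
p_kw = kw_overall / total_resumes                # evidence P(kw)

p_auto_given_kw = p_kw_given_auto * p_auto / p_kw  # posterior P(Auto | kw)
```

A keyword that appears almost only in "Auto" resumes (here 4,000 of its 4,500 occurrences) pushes the posterior far above the prior, which is exactly the "diagnosis" the note describes.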
Page 89
·
Pattern matching (as demonstrated in the book "Digital Biology").
·
With the addition of all the keywords (including NEW keywords, not previously recorded as "keywords") from each NEW / INCOMING resume, the "prior probabilities" will keep changing.
Page 90
·
In this "resume", assuming it belongs to a particular "INDUSTRY":
# what keywords can be expected
# what keywords may NOT be expected
·
We can also reverse the "reasoning", viz:
# What "INDUSTRY" (or FUNCTION) might a given incoming resume belong to, if certain keywords are absent?
The "result / answer" provided by the Expert System can then be tested / verified against what the jobseeker himself has "clicked".
Page 91
·
Reiteration: each new incoming resume would change the "Prior Probability" (again and again) for each > Industry > Function > Designation Level > Edu Quali > Exp, etc.
(A handwritten graph drawn on this page, with X axis = No. of resumes and Y axis = Probability, shows initial wide oscillations converging as more and more resumes get added to the database.)
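The convergence sketched in that handwritten graph is easy to simulate: draw resumes one by one with a fixed true proportion of (say) "Auto" resumes, and track the running estimate of the prior. The true proportion of 0.2 is an arbitrary choice for illustration:

```python
# Simulate the handwritten graph: the running estimate of a prior
# probability oscillates at first and converges as resumes accumulate.
import random

random.seed(42)  # deterministic, for the sake of the example
TRUE_P = 0.2     # assumed true share of "Auto" resumes (illustrative)

estimates = []
auto_count = 0
for n in range(1, 5001):
    auto_count += random.random() < TRUE_P  # one more incoming resume
    estimates.append(auto_count / n)        # running estimate of the prior

# Early estimates swing widely; late ones settle near TRUE_P.
```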
Page 92
·
With 65,000 resumes (i.e. patients) and 13,000 keywords (symptoms), we could get a fairly accurate "estimate" of the "Prior Probabilities".
This will keep improving / converging as the resume database and the keywords database keep growing (especially if we succeed in downloading thousands of resumes, or job advts, from Naukri / Monster / Jobsahead etc.).
·
E.g.: Birthdate as well as Age.
·
This is good enough for us.
Page 93
·
E.g.: In Indian resumes, the keyword "Birthdate" would have a probability of 0.99999!
Of course, most such keywords are of no interest to us!
Page 95
·
For our purpose, "keywords" are all "items of evidence".
If each and every "keyword" found in an (incoming) resume corresponds to our "hypothesis" (viz: keywords A & B & C are always present in resumes belonging to the "Auto Industry"), then we obtain the maximum possible "Posterior Probability".
·
So, if our knowledge base (not only of keywords, but of phrases / sentences / names of current & past employers / posting cities etc.) is VERY WIDE and VERY DEEP, we would be able to formulate more accurate hypotheses and obtain a higher "Posterior Probability".
Page 96
·
So, the key issue is to write down a "Set of Hypotheses".
Page 97
·
Let us say the keyword "Derivative" may have a very LOW "frequency of occurrence" in 65,000 resumes (of all industries put together), but it could be a very important keyword for the "Financial Services" industry.
Page 98
·
E.g.: certain keywords are often associated with (found in) certain "Industries" or certain "Functions" (Domain keywords).
Page 99
·
With each incoming resume, the probability of each keyword (in the keyword database) will keep changing.
·
E.g.: Does "Edu Quali" have any role / effect / weightage in the selection of a candidate?
·
E.g.: What is the "Max Age" (or Min Exp) at which a corporate will appoint a > Manager > General Manager > Vice President?
Page 100
·
Line of Best Fit ?
Page 103
·
Say every 20th incoming resume belongs to > XYZ industry > ABC function (based on the frequency distribution in the existing 65,000 resumes).
Page 104
·
If an incoming resume belongs to the "Auto Industry", it would contain keywords like "Car / Automobile" etc., OR Edu Quali = Diploma in Auto Engineering.
·
E.g.: The probability of selection of an executive is 0 (zero) if he is above 65 years of age!
·
One could assign a "probability of getting appointed", OR even a "probability of getting shortlisted", for each "Age", for each "Years of Experience", for each "Edu Level", etc.
Page 105
·
Obviously:
# a person (incoming resume) having NO EXPERIENCE (a fresh graduate) will NOT get shortlisted for the position of MANAGER (zero probability)
# a person (incoming resume) having NO GRADUATE DEGREE will NOT get shortlisted for the position of MANAGER (zero probability)
# a person (incoming resume) with less than 5 years of experience will NOT get shortlisted for the position of GENERAL MANAGER (zero probability),
BUT
# will get shortlisted for the position of SUPERVISOR (0.9 probability)
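These zero-probability cases are hard "knock-out" rules that can run before any probabilistic scoring; only resumes that pass them need a computed probability. A sketch with hypothetical field names and the thresholds from the note:

```python
def passes_hard_rules(resume, position):
    """Return False for the 'zero probability' combinations noted above."""
    exp = resume.get("exp_years", 0)
    if position == "MANAGER":
        if exp == 0:                           # fresh graduate
            return False
        if not resume.get("graduate_degree"):  # no graduate degree
            return False
    if position == "GENERAL MANAGER" and exp < 5:
        return False
    # The SUPERVISOR case (0.9) is a soft probability, not a knock-out rule,
    # so it belongs in the scoring stage rather than here.
    return True
```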
Page 106
·
So, we need to build up a database of WHO (executive) got shortlisted / appointed, by WHOM (client), WHEN and WHY (to the best of our knowledge) over the last 13 years, and HOW MUCH his background matched the "CLIENT REQUIREMENTS" and the KEYWORDS in the resumes.
Page 107
·
How "good" or "bad" is a given resume for a given client's needs?
·
Co-relating > the "search parameters" used, with > the "search results" (resumes / keywords in resumes), for each and every online / offline "resume search".
Page 109
·
Is this like saying, "Resume A belongs to the ENGINEERING INDUSTRY to some degree, and also to the AUTOMOBILE INDUSTRY to some degree"?
The same can be said about "Functions".
·
Same as our rating one resume as "Good" and another as "Bad" (of course, in relation to the client's stated needs).
Page 110
·
E.g.: Set A > resumes belonging to the ENGINEERING industry
Set B > resumes belonging to the AUTOMOBILE industry
·
A resume can belong to both sets, with "different degrees of membership".
Page 114
·
In our case, we have to make a "discrete decision", viz: "Shall I shortlist and forward this resume to the client, or not?"
Page 115
·
If a given keyword is present in a resume, then treat the resume as belonging to the "Engg Industry"; OR (better), a given "combination of keywords" being present or absent in a resume shall decide whether that resume should be classified / categorized under (say) the "Engg Industry".
·
For us, "real-world experience" = the several sets of "keywords" discovered in 65,000 resumes which are already categorized (by human experts) as belonging to Industry (or Function) A or B or C.
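The "degree of membership" idea can be sketched as a crude fuzzy classifier: one resume gets a membership score in each industry, here simply the share of that industry's marker keywords it contains. The marker sets are invented for illustration:

```python
# Fuzzy membership: one resume can belong to several industries at once,
# each with its own degree of membership. Marker keywords are illustrative.
industry_markers = {
    "ENGINEERING": {"machining", "fabrication", "welding", "cad"},
    "AUTOMOBILE": {"gearbox", "chassis", "cad", "oem"},
}

def membership(resume_keywords):
    """Degree of membership (0..1) of one resume in each industry set."""
    return {
        industry: len(resume_keywords & markers) / len(markers)
        for industry, markers in industry_markers.items()
    }

m = membership({"cad", "welding", "machining", "gearbox"})
# The resume belongs to BOTH sets, to different degrees.
```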
Page 118
·
E.g.: A given resume is a
# very close match
# close match
# fairly close match
# not a very close match
with the client's requirements.
Page 119
·
I am sure that by now many more must be commercially available.
Page 127
·
Who knows what "NEW / DIFFERENT" keywords would appear (and which would disappear) in a given type of resume over the next 5 / 10 years?
·
We may think of our Corporate Database in these terms and call it a "Corporate Knowledgebase".
Page 129
·
One could replace "colour of teeth" with Designation Level (MD / CEO / President / Gen Mgr etc.).
·
Worth checking, now that 14 years have elapsed.
·
"Data Structure": a tabulation with keywords listed against the following column headings:
Industry / Function / Designation Level / Edu / Age / Current Employer / Skills / Attitude / Attribute
Page 132
·
Is this somewhat like saying: "This (given) resume belongs to Industry X or Industry Y"?
·
The number of times a given resume got shortlisted in years X / X+1 / X+2 / etc.
·
If we treat "getting shortlisted" as a measure of "success" (which is as close to defining "success" as we can get) = the prize money won.
·
Of course, in our case, the "number of times" a resume gets shortlisted (in any given year) is a function of > Industry background > Function background > Designation Level > Edu > Age (max) > Exp (min) > Skills > Knowledge etc.
Page 133
·
Which is what resumes are made up of (sentences).
See my notes on ARDIS (Artificial Resume Deciphering Intelligent System) and ARGIS (Artificial Resume Generating Intelligent System).
Page 136
·
This was written 13 years ago. In the last few months, scientists have implanted micro-processors in the human body and succeeded in "integrating" these (chips) into the human nervous system! E.g.: restoring "vision" thru a chip implant (Macular Degeneration of the Retina).
Page 139
·
There is no doubt Princeton must have made tremendous progress in the last 13 years.
·
We now have the "Gigabyte" (1,000 times bigger).
·
How about a resume?
Page 140
·
(A weather predictor): Every nation has a "Weather Forecast Satellite" these days.
Page 143
·
In ARDIS, I have talked about character recognition / word recognition / phrase recognition.
Page 148
·
You see what you "want" to see, and you hear what you "want" to hear.
·
I have no doubt these (object-oriented languages) must have made great strides in the last 20 years.
Page 149
·
Are our 13,000 keywords the "Working Memory"?
·
E.g.: The frequency distribution of the keywords belonging to (say) 1,000 resumes from the "Pharma" industry is a "pattern". When a new resume arrives, the software "invokes" this "pattern" to check the amount / degree of MATCH.
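One way to "invoke a pattern" and measure the degree of MATCH is to compare a new resume's keyword frequencies against the industry's frequency pattern, for instance by cosine similarity. The "Pharma" pattern and the resume below are invented for illustration:

```python
# Compare a new resume's keyword frequencies against an industry's
# frequency 'pattern' using cosine similarity. Data is illustrative.
import math

pharma_pattern = {"gmp": 0.9, "formulation": 0.7, "regulatory": 0.5}

def cosine_match(resume_freqs, pattern):
    """Cosine similarity (0..1 here, since values are non-negative)."""
    keys = set(resume_freqs) | set(pattern)
    dot = sum(resume_freqs.get(k, 0) * pattern.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in resume_freqs.values()))
    nb = math.sqrt(sum(v * v for v in pattern.values()))
    return dot / (na * nb) if na and nb else 0.0

score = cosine_match({"gmp": 1.0, "formulation": 1.0}, pharma_pattern)
```

Cosine similarity is one reasonable choice among several; the notes themselves do not fix a particular matching measure.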
Page 150
·
E.g.: This is like starting with an assumption (hypothesis) that the NEXT incoming resume belongs to the "Auto" industry or to the "Sales" function, and then proceeding to prove / disprove it.
Page 151
·
The keywords pertaining to any given "Industry" or "Function" will go on changing over the years, as new skills and knowledge get added. So recent keywords are more valid.
·
"Balanced Score Card" was totally unknown 2 years ago!
Page 152
·
In the case of "keywords", is this comparable to ordering (i.e. arranging) them by frequency of occurrence?
Page 153
·
Treating a child as an "Expert System", capable of drawing inferences.
·
E.g.: Blocks of different colours.
·
The rules will remain the same.
·
The keywords will change over time (working memory).
·
A gross (or crude) search, to be refined later on.
Page 154
·
Obtaining an understanding of what the system is trying to achieve.
·
I suppose this has already happened.
Page 155
·
See my notes on "Words and Sentences that follow".
Page 178
·
We are thinking in terms of past "resumes" which have been "shortlisted".
·
I have already written some rules.
·
This must have happened in the last 13 years!
Page 179
·
We must question our consultants as to what logic / rules they use / apply (even subconsciously) to "shortlist", or rather to select from, shortlisted candidates.
Page 181
·
The list of "Knowledge Acquisition Tools" developed by us:
Highlighter / Eliminator / Refiner / Compiler / Educator / Composer / Match-maker / Member Segregator / Mapper (to be developed)
Page 186
·
I have written some rules, but many more need to be written.
Page 191
·
Like the "weightages" in a Neural Network?
·
To find "real evidence", take the resumes (keywords) AND the "Interview Assessment Sheets" of "successful" candidates, and find the co-relation.
·
E.g.: Rules re "Designation Level" could conflict with rules re "Experience Years" or rules re "Age (min)".
Page 192
·
E.g.: Edu Quali = CA / ICWA / MBA etc., for the FINANCE function.
·
A basic degree in Commerce (B Com).
·
"Rule Sets" on:
Age (max) / Exp (min) / Edu Quali / Industry / Function / Designation Level etc.
·
If our corporate clients can explicitly state the "weightages" for each of the above, it would simplify the design of the Expert System.
Mr Nagle had actually built this (weightages) factor into our HH3P search engine, way back in 1992.
Instead of clients, consultants entered these "weightages" into the search engine; but the resume database was too small (maybe < 5,000).
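The HH3P "weightages" idea is a weighted sum: the client (or consultant) assigns a weight to each attribute, and each resume gets a composite score from its per-attribute degree of match. The weights and match scores below are illustrative, not from the actual HH3P engine:

```python
# Weighted scoring of a resume against client-stated 'weightages'.
# All weights and match scores are invented for illustration.
client_weightages = {"edu": 0.20, "exp": 0.30, "industry": 0.35, "age": 0.15}

def weighted_score(match_scores):
    """match_scores maps attribute -> 0..1 degree of match with client need."""
    return sum(weight * match_scores.get(attr, 0.0)
               for attr, weight in client_weightages.items())

score = weighted_score({"edu": 1.0, "exp": 0.5, "industry": 1.0, "age": 0.0})
```

Because the weights sum to 1, a perfect match on every attribute scores exactly 1.0, which makes resumes directly comparable for ranking.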
Page 197
·
Quacks vs Doctors
·
Quacks sometimes do "cure" a disease, but we would never know why or how!
Page 198
·
We have databases of:
# successful (or "failed") candidates, and
# their "resumes" & "Assessment Sheets", and
# the keywords contained in these resumes
Page 199
·
We will need to define which are the "successful" and which are the "failure" candidates (of course, in relation to a given vacancy).
We want to be able to predict which candidates are likely to be "successful".
·
We must have data on 500 past "successful" and 5,000 "failure" candidates in our database.
Page 222
·
Abhi: this is exactly what our "Knowledge Tools" do:
Highlighter / Eliminator / Refiner / Sorter / Matchmaker / Educator / Compiler / Member Segregator etc.
Page 227
·
Diebold predicted such a factory in his 1958 book, "Automatic Factory".
·
Someday we will modify our OES (Order Execution System) in such a way that our clients can "self-service" themselves.
·
This was written in 1989.
·
E-Commerce, defined 13 years ago.
·
"SIVA" (on Jobstreet.com) has elements of such a self-service system.
Page 232
·
A statistical analysis of thousands of job advts from the last 5 years could help us extrapolate the next 10 years' trend.
Page 236
·
We are planning a direct link from HR Managers to our OES (which is our factory floor).
We would still need "consultants", but only to interact (talk) with clients and candidates, not to fill in the INPUT screens of OES!
Page 237
·
Obviously, the Author had a vision of the Internet, which I could envision 2 years earlier in my report to the L&T Chairman (QUO VADIS / 1987).
Page 239
·
Norbert Wiener had predicted this in 1958.