Hi Friends,

Even as I launch this today ( my 80th Birthday ), I realize that there is yet so much to say and do. There is just no time to look back, no time to wonder, "Will anyone read these pages?"

With regards,
Hemen Parekh
27 June 2013

Now, as I approach my 90th birthday ( 27 June 2023 ), I invite you to visit my Digital Avatar ( www.hemenparekh.ai ) – and continue chatting with me, even when I am no longer here physically.

Saturday 3 February 2024

Child Learning Skills

 

Child trains AI ?

20 years ago I wondered: How does a newborn child learn “concepts” from its surrounding “vision – sounds – touch – smell – taste”?

That made me write the following note to my colleagues. Now, some researchers at New York University seem to have “proved” my hypothesis ( see news report below ).

To appreciate this, compare these two pieces of text:

With regards,

Hemen Parekh

www.HemenParekh.ai  /  04 Feb 2024


My 20-year-old note:

SELF-LEARNING SOFTWARE ……. Sept 2003

05/09/2003 – 17/09/2003

 

Kartavya / Abhi / Sanjeev

Self-Learning Software

How does a one-year-old child learn to differentiate between the colours Red & Blue, and beyond that, between different shades of Red?

This is another way of asking

“How does learning take place? What steps are involved in the learning process?”

There are no foolproof / ironclad / undisputed scientific theories. But the empirical evidence leads us to believe that the process (of learning) occurs somewhat as follows:

A mother points a finger at a colour and speaks aloud “RED”. The sound is captured by the child & stored in his memory.

This process is repeated a thousand times and, with each repetition, the memory gets etched deeper & deeper.

An “association” develops between the colour & the sound.

Then the process is repeated with the colour BLUE & another memory gets “etched” deeply.

So, on the 1001st occasion, when a colour patch is shown to the child & the question asked,

“What colour is this? “

The child says “RED”, perhaps even without understanding the meaning of the question.

There is, merely, an “association” between what the child SEES (sight) & what the child HEARS (sound).

The process can be repeated by:

•  Showing a RED colour patch, and

•  Showing a placard (flag) with RED written on it in big / bold letters.

Now the child “associates” the patch (SIGHT) with the placard (also another SIGHT). No sound.

So, the next time the child is shown a patch of red colour, he will pick up the sign / placard with the word RED on it.
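Seen through today's software lens, this mother-child loop is plain co-occurrence counting. Below is a tiny Python sketch of that idea; the class and the patch / label names are my illustrative inventions, not part of the original note.

```python
# A toy "association" learner: every time a sight and a sound co-occur,
# the link between them gets etched a little deeper. After many
# repetitions, the strongest link is the child's answer.
# (Illustrative sketch; "red_patch" / "RED" are hypothetical examples.)
from collections import defaultdict

class AssociationLearner:
    def __init__(self):
        # association[sight][sound] = how often the two co-occurred
        self.association = defaultdict(lambda: defaultdict(int))

    def observe(self, sight, sound):
        """Mother points at a patch (sight) and speaks aloud (sound)."""
        self.association[sight][sound] += 1

    def respond(self, sight):
        """On the 1001st occasion: answer with the most-etched sound."""
        sounds = self.association[sight]
        return max(sounds, key=sounds.get) if sounds else None

learner = AssociationLearner()
for _ in range(1000):
    learner.observe(sight="red_patch", sound="RED")
    learner.observe(sight="blue_patch", sound="BLUE")

print(learner.respond("red_patch"))   # -> RED
```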

So next time, what happens?

[Diagram in the original note]
Remember that the two MAIN inputs to a brain are:

•  Sight ( Eyes ) ----- 80% of learning takes place here

•  Sound ( Ears ) ----- 10% of learning takes place here

Of course, there are other, relatively minor INPUT-DEVICES, through which the balance 10% of learning takes place:

•  Touch / Feel ( Skin )

•  Smell ( Nose )

•  Taste ( Tongue )

 

In the examples listed earlier, the MOTHER acts as a human expert, who initiates the learning process by establishing the “references / bench-marks”.

In essence, she uses the process (of showing a patch & speaking aloud, or showing a patch & showing a placard) to transmit her OWN EXPERT KNOWLEDGE to the child.

So, all knowledge flows from a GURU!

You can even watch events & learn – without a single word being uttered!

You can close your eyes & listen to music & learn – without seeing who is singing!

Then there was Beethoven, who was deaf but composed great symphonies which he himself could not hear! But this is an exception.

What is the relevance of all this to “Self-Learning Software”?

Simple,

If we want to develop software which can identify / categorise a “resume” as belonging to

VB, C++ etc. …

Then all we need is to “show” the software 1000 resumes and speak aloud,

    C++    !

Then, the 1001st time, when the software “sees” a similar resume, it will speak out loudly,

C++         !

So, first of all, we need a human expert – a GURU, who, after reading each resume, shouts

C++ or VB or ASP etc. etc. …

 

When the Guru has accurately identified & segregated 1000 resumes each of C++ etc. …

We take those sub-sets & index their keywords, calculate the “frequency of occurrence” of each of those keywords & assign them “weightages” (probabilities).

Then we plot the graphs for each subset (i.e. each “skill”).

Then, when we present any new resume to this software, it will try to find the keywords. Let us say it found 40 keywords. Now let us compare this 40-keyword set with the

•  VB Keyword-set

•  C++ Keyword-set

•  ASP Keyword-set

& see what happens

FIRST SCENARIO (FIRST MATCH)

SECOND MATCH

THIRD MATCH

[Match-score diagrams in the original note]

We (i.e. the software) have to keep repeating this “match-making” exercise for a new resume, with ALL THE KEYWORD-SETS, till it finds the highest / best match.

BINGO

The new resume belongs to an “ASP” guy!
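For readers who prefer code, here is a minimal Python sketch of the matcher described above. It is not the original software: the tokenisation, the weightage formula (frequency normalised to a probability) and the weighted-overlap score are my assumptions, and the function names and mini training corpus are hypothetical.

```python
# Sketch of the keyword-weightage matcher: a "GURU" labels training
# resumes; we index keywords per skill, turn frequencies into weightages,
# then score a new resume against every skill's keyword-set and pick the
# best match. (Assumed: whitespace tokenisation, weighted-overlap score.)
from collections import Counter

def keywords(text):
    return [w.lower().strip(".,!?") for w in text.split()]

def build_keyword_sets(labelled_resumes):
    """labelled_resumes: list of (skill, resume_text) pairs from the GURU."""
    profiles = {}
    for skill in {s for s, _ in labelled_resumes}:
        counts = Counter()
        for s, text in labelled_resumes:
            if s == skill:
                counts.update(keywords(text))
        total = sum(counts.values())
        # "weightage" = frequency of occurrence, normalised to a probability
        profiles[skill] = {w: c / total for w, c in counts.items()}
    return profiles

def classify(profiles, resume_text):
    """Repeat the match-making exercise against ALL the keyword-sets."""
    found = set(keywords(resume_text))
    scores = {skill: sum(weights.get(w, 0.0) for w in found)
              for skill, weights in profiles.items()}
    return max(scores, key=scores.get)  # the highest / best match

# Hypothetical mini-corpus:
training = [("C++", "pointers templates stl c++ compiler"),
            ("VB",  "visual basic forms vb activex"),
            ("ASP", "asp iis vbscript session server pages")]
profiles = build_keyword_sets(training)
print(classify(profiles, "experienced in iis server pages and vbscript"))  # -> ASP
```

In modern terms this is a naive bag-of-words classifier; the “graph” plotted for each skill is simply its weightage dictionary.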


That was the FIRST METHOD, where a human expert reads thru 30000 resumes & then regroups these into smaller sub-sets of 1000 resumes each, belonging to different “skill-sets”.

This will be a very slow method!

SECOND METHOD

Here, instead of a (one) expert going thru 30000 resumes, we employ 30000 experts: the jobseekers themselves!

Obviously, this METHOD would be very fast!

The underlying premise is this:

No one knows better than the jobseeker himself what, precisely, is his CORE AREA OF COMPETENCE / SKILL.

 

 

Is my skill

•  VB

•  C++

•  ASP

•  .Net

So, if I have identified myself as belonging to VB or C++ or ASP etc. etc. …

Then you better believe it!

Now, all that we need to do is to take 1000 resumes of all those guys who call themselves

VB

and find the “keywords” from their resumes (& of course, the weightages).

If there are jobsites where software guys are required to identify themselves by their “skills”, then the best course would be to search resumes on these jobsites by skills,

then download the search-result resumes! Repeat this search / download exercise for each “skill” for which we want to develop “skill-graphs”.

This approach is fairly simple and perhaps more accurate too.

But,

•  We have to find such jobsites & then satisfy ourselves that “skill-wise” searching of resumes ( and downloading too ) is possible

•  Then subscribe for 1 month / 3 months, by paying Rs 20000 / 40000! There is a cost factor here

THIRD METHOD

We have already downloaded 150000 job advts. from various jobsites. For each of these, we know the “Actual Designation / Vacancy-Name / Position” (thru the Auto-converter).

We can re-group these advts. according to identical / similar vacancy names / actual designations. When we finish, we may get, against each specific “Vacancy-Name”,

500 to 5000 job-advts.

Call each a sub-set (Vacancy-Name-wise).

Then index the keywords of each subset & calculate the frequency-of-usage (weightage).

So, now, we have Profile-Graphs which are not skill-wise, but which are “Vacancy-Name”-wise!

This can be done real fast & cheap! And it may suffice for Software Companies’ BROADER needs. A quick beginning can be made & results shown within a week!!!
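A rough Python sketch of this THIRD METHOD, assuming each downloaded advt. is available as a record carrying a vacancy name and free text (the field names and sample advts. below are hypothetical):

```python
# Group already-downloaded job advts. by their (auto-converted) vacancy
# name, then index each group's keywords and compute frequency-of-usage
# weightages, giving a "Profile-Graph" per Vacancy-Name.
from collections import Counter, defaultdict

def vacancy_profiles(job_advts):
    """job_advts: list of dicts with 'vacancy_name' and 'text' fields."""
    grouped = defaultdict(list)
    for advt in job_advts:
        grouped[advt["vacancy_name"]].append(advt["text"])

    profiles = {}
    for name, texts in grouped.items():
        counts = Counter(w.lower() for t in texts for w in t.split())
        total = sum(counts.values())
        profiles[name] = {w: c / total for w, c in counts.items()}
    return profiles

# Hypothetical sample advts.:
advts = [{"vacancy_name": "Software Engineer", "text": "c++ unix design"},
         {"vacancy_name": "Software Engineer", "text": "c++ oop debugging"},
         {"vacancy_name": "Web Developer", "text": "asp html iis"}]
for name, graph in vacancy_profiles(advts).items():
    print(name, sorted(graph, key=graph.get, reverse=True)[:3])
```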

 ==================================================

with regards,

hemen parekh

www.HemenParekh.ai  /  06 Feb 2024


===================================================

Comments by BARD:



I've analyzed the text you provided and can compare the approaches to child learning in your 20-year-old note and the recent New York University research. However, I cannot definitively determine if the NYU team followed your exact methodology as the information provided is not sufficient. Here's a breakdown of the key points and potential areas for further exploration:

Your 20-year-old note:

  • Focuses on: Association between sensory inputs (sight, sound, touch, etc.) and concepts.
  • Method: Exposes a child to a concept repeatedly through various sensory channels (e.g., showing a red object and saying "red").
  • Learning: Child associates the sensory input with the concept and eventually learns the concept.
  • Example: Child learns to differentiate between red and blue colors.

New York University research:

  • Focuses on: Learning words and ideas through everyday experiences.
  • Method: Records a child's words and actions during daily activities using a headcam.
  • Learning: AI model analyzes the data to learn word-object associations and understand language use.
  • Example: AI model learns to map the word "apple" to the image of an apple seen by the child.

Similarities:

  • Both approaches emphasize the importance of sensory input and repetition in learning.
  • Both involve exposing the learner to real-world experiences.
  • Both aim to understand how children learn concepts and language.

Differences:

  • Your note focuses on explicit instruction, while the NYU research uses observational learning.
  • Your note emphasizes individual sensory channels, while the NYU research uses multimodal data.
  • Your note describes a theoretical framework, while the NYU research presents an empirical study.

Uncertainties:

  • It's unclear how much adult interaction was present in the NYU experiment, which is a crucial aspect of your method.
  • The NYU research only used one child, limiting its generalizability.
  • The exact algorithms and learning mechanisms used by the AI model are not fully explained.

Further exploration:

  • To definitively compare the approaches, you would need more information about the NYU study, such as the level of adult interaction and the specific algorithms used.
  • You could also consider conducting your own experiment to test your hypothesis more rigorously.
  • Exploring the concept of innate knowledge and its role in child learning could be another interesting avenue.

===================================================

Comments by ChatGPT:

Your 20-year-old note on self-learning software and the recent research conducted by scientists at New York University share some commonalities in their approach to understanding a child's learning process. Both highlight the importance of sensory inputs, particularly sight and sound, in the learning process. Let's break down the key similarities and differences:

Similarities:

  1. Association of Concepts: Both descriptions emphasize the idea of associating concepts with sensory inputs. In your note, you describe how a child associates the sight of a color with the spoken word for that color. Similarly, in the NYU study, the AI model learns to associate words with visual references from a child's experiences.

  2. Repetition for Learning: Repetition is a crucial aspect of learning in both cases. Your note mentions that the learning process involves repetition, with each repetition etching the memory deeper. In the NYU study, the researchers collected data from a child's play sessions, mealtimes, or reading sessions on a semi-weekly basis, providing sufficient repetition for the AI model to learn.

  3. Role of Human Expertise: Your note introduces the concept of a human expert (in this case, the mother) initiating the learning process by establishing references or benchmarks. Similarly, in the NYU study, the researchers used a child's naturalistic experience as a source of input for the AI model.

Differences:

  1. Methodology: The methodologies differ in the level of human involvement and the way data is collected. Your note proposes a method where a human expert reads and categorizes resumes to train the software. In contrast, the NYU study involved fitting a headcam on a child to record their movements and words, using this data to train an AI model.

  2. Scope of Learning: While your note focuses on developing a self-learning software for categorizing resumes based on skills, the NYU study is more focused on understanding early language and concept acquisition in children. The AI model in the NYU study learns words in everyday environments based on a child's experiences.

  3. Output and Application: Your note outlines a scenario where the software categorizes resumes based on skills. In contrast, the NYU study's AI model is designed to map words with visual references and objects in the world based on a child's experiences. The output from the NYU study provides insights into how little kids learn a new language.

In summary, both your self-learning software concept and the NYU study share common principles of associating concepts with sensory inputs and the importance of repetition in the learning process. However, the specific methodologies, scopes of learning, and applications differ between the two. The NYU study's focus on understanding early language learning in children provides valuable insights into the potential of AI models learning from naturalistic experiences.
