Avinash Meetoo

Let us build a Smart Mauritius together


Computer and Information Security are key to the Digital Transformation of Mauritius

18 January 2020 By Avinash Meetoo 4 Comments

On Thursday 16 January 2020, I gave a presentation on the “Privacy Aspects of ICT Projects” during a conference organised by the Data Protection Office of the Ministry of Technology, Communication and Innovation. A few days prior to the conference, while I was reflecting on what I was going to say, I had an intuition: Mauritius will have to undergo a profound digital transformation at all levels (government, companies, schools and individuals) if we want to attain Vision 2030.

This is why I decided to start my presentation by explaining to the audience what Vision 2030 means. I am always quite amazed, during the various talks I give, that only a few people know about this vision of the Government, first mentioned by the then Prime Minister on 22 August 2015. Today, Vision 2030, which is about making Mauritius an inclusive high-income country well before 2030, is what dictates the various strategies and actions undertaken by the Government.

For me, the best way to become a high-income country, and thus reach the status of Smart Mauritius, is for our business entities, the conglomerates as well as the SMEs and startups, to generate more revenue. Given the limited size of the Mauritian market, this needs to be done through the development of new products and new services for new markets (especially the African market). This is why Government is investing massively in Smart Infrastructure, Smart Mobility, Smart Education, etc.

Now, to create new products and new services (in any field) and conquer new markets, one has to use technology to the full, whether that means a tried-and-trusted technology like Linux or the latest fashionable thing such as Artificial Intelligence.

A lot of organisations have realised that technology in 2020 is about software. Software is eating the world, after all. Our organisations, whether in Government or in the private sector, will have to either implement or develop software. It is no wonder that 2/3 of the job offers on the LinkedIn website in 2018 were for software engineers (15% for IT people and 15% for data science people).

Now, the software needs to be trusted by all users, especially the ones giving their personal data. Therefore, making sure that the software respects the requirements of the Mauritian Data Protection Act or the European General Data Protection Regulation is key for the organisation to be trusted. Privacy has become so important that an organisation which acquires a reputation of not protecting the data of its clients or users is essentially moribund…

How can we implement or develop software which collects and processes personal data (as provided by users and clients) and which offers all guarantees that the privacy of the individual is going to be protected? This is quite a challenge and two technical avenues can be explored: computer security and information security.

Computer security is about the hardware and software aspects: making sure that the principle of security by design is followed from the very beginning (i.e. users should only have access to the subset of data which they really require, which can be achieved through the principle of least privilege and proper access control). Then, security measures such as two-factor authentication, encryption, firewalls and intrusion detection mechanisms can be used to secure the infrastructure further. I am quite confident that using something like the Linux operating system for deployment is a great idea (ask Google, Amazon and Facebook!). Interestingly, the last time I used Windows for anything serious was around 2006, 14 years ago. Since then, I’ve been on Linux and macOS and I’m very happy. Of course, it is also important to train users to identify threats and to respond correctly when there is an incident.
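To make the principle of least privilege concrete, here is a minimal sketch of role-based access control. The roles, permissions and function names are hypothetical, chosen purely for illustration:

```python
# A toy role-based access control table: each role is granted only
# the smallest set of actions it really requires (least privilege).
PERMISSIONS = {
    "clerk":   {"read"},
    "officer": {"read", "update"},
    "admin":   {"read", "update", "delete"},
}

def is_allowed(role: str, action: str) -> bool:
    """A user may only perform actions explicitly granted to their role."""
    return action in PERMISSIONS.get(role, set())
```

Any action not explicitly granted is denied by default, which is the safer stance when designing for security from the start.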

I then spoke about information security which, interestingly, is more about the most vulnerable point in IT: the human. The best way to protect the information within an organisation is through the establishment of a good security policy (which needs to be fully understood and followed to the letter by all). It is also important to have physical security for the people and the equipment. This can be a challenge because people move around. Laptops and smartphones today contain valuable information and, being so easy to steal, require a proper asset management system for business continuity. Of course, the data needs to be protected (backups, mirroring, etc.) as well as the network.

Underlying everything is making sure that all layers, including all software, respect the requirements of laws such as our own Data Protection Act, which mandates that users who give their data can also modify or erase it afterwards. The law also mandates that the user be informed whenever his data is being collected so that he can give his consent or not. This is quite a challenge from a software development point of view.
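At the code level, those obligations (consent before collection, rectification and erasure afterwards) might look like the following minimal sketch. The class and method names are hypothetical, not taken from any law or library:

```python
class PersonalDataStore:
    """Toy store illustrating consent, rectification and erasure."""

    def __init__(self):
        self._records = {}

    def collect(self, user_id, data, consent_given):
        # The user must be informed and must consent before collection.
        if not consent_given:
            raise PermissionError("cannot collect personal data without consent")
        self._records[user_id] = data

    def rectify(self, user_id, data):
        # The user can modify his data afterwards.
        self._records[user_id] = data

    def erase(self, user_id):
        # The user can have his data erased.
        self._records.pop(user_id, None)
```

A real system would of course add audit trails, authentication and cascading deletion across backups, which is precisely where the software development challenge lies.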

Adhering to everything in computer security and in information security is difficult, costly and can be a lengthy process. But the reward is more trust, and this is the only way to get more business and, hence, more income and profit.

As I had been given only 20 minutes for the presentation, I could not go into the details of everything. I was telling a friend, after the presentation, that, if I were still at Knowledge Seven, I would perhaps have created a 30-hour-long training on this important topic.

By the way, I didn’t reinvent the wheel. A lot of my presentation was based on what I read online, most notably on Wikipedia which I love.

Thanks to everyone who came to the presentation and I hope you learned a few things. I definitely did while preparing it and doing it. Thanks to @yurit0s for the photo where I am visible.

Filed Under: Computing, Education, Future, News, Science, Society, Technology

The Digital Economy: Challenges and Opportunities in Research for Mauritius

1 July 2019 By Avinash Meetoo 5 Comments

Last week, I was invited to give a presentation on La question du Numérique: Enjeux, défis et perspectives de la recherche pour le développement socio-économique de Maurice during the Assises de la Recherche organised by l’Université des Mascareignes.

For all the non-French speaking readers of this blog, I meant to say that I was invited to give a speech on The Digital Economy: Challenges and Opportunities in Research for the Socio-Economic Development of Mauritius. This was during the Research Week organised by the University of Mascareignes, one of the four public universities in Mauritius, notable for its affiliations with French universities and for the use of French as the medium of instruction.

I started my presentation with Vision 2030 of the Government, which is about transforming Mauritius into a high-income and inclusive country well before 2030. For this vision to come true, a number of growth enablers have been identified:

They are: good infrastructure (Internet, roads, buildings, hospitals, etc.), good education (the nine-year schooling reform, free education for undergraduate students in public universities and polytechnics, etc.), good governance, economic integration (without which one can forget about inclusiveness…) and, of course, innovation.

We are fortunate to (more or less) have the first four in Mauritius. Concerning innovation, we still have some work to do but things are moving in the right direction thanks to the contribution of startups, incubators and some private companies.

Interestingly, Vision 2030 also speaks of six growth sectors, namely: agriculture (sustainable, eco-friendly…), the ocean economy (for food security and tourism), tourism (new products, new markets, new airline routes…), manufacturing (high-tech, new markets…), financial services (regulations, international…) and ICT services (export, skills development, new products…)

At this point, I asked a question to the audience: how many of them were aware of this strategic plan for Mauritius, namely Vision 2030? Only a handful were and I told them that there are two culprits: us (for not having marketed the document properly) and them (for not being curious). Interestingly, we all agreed that this was not very good, hence my focus on writing a few posts on this blog referring to Vision 2030 and giving links to the official documents…

I then talked about the research perspectives. I told them to, first of all, form multidisciplinary teams of researchers and students, identify an important problem in one of the growth sectors, make sure that the problem is a big one instead of being a trivial one and work hard on solving the problem!

Easier said than done obviously. But much needed if we want to transform the country.

I then spent a few minutes talking about essential emerging technologies that they could use to solve the problems identified. I focused on the Internet of Things (to collect data with sensors), on databases and blockchains to store data (the latter for data which should not be tampered with), on analytics (which I like to call statistics) to infer things, and on Deep Learning, once again to infer things, but only when the data is too big or too unstructured.

The interesting thing is that a lot of people told me afterwards that I had made this part really easy to understand for them. I’m happy about that.

At the end, I told them a big thank you and that their contribution counts in making the Republic of Mauritius (which includes Rodrigues, Saint Brandon and Agaléga) a better place.

Filed Under: Computing, Education, Finance, Future, Science, Society, Technology

Artificial intelligence, deep learning and chatbots demystified

24 April 2019 By Avinash Meetoo Leave a Comment

Developer Conference 2019, also known as DevCon 2019, took place in Mauritius from 11 – 13 April 2019. Once again, Jochen Kirstätter and his team at the Mauritius Software Craftsmanship Community worked fantastically to make this event a reference throughout the region. All geeks of Mauritius and a few from neighbouring countries made it a must to attend and this means that speakers had quite a lot of pressure this year to deliver!

Since the very beginning, I knew that I was going to speak about Artificial Intelligence. As I told the audience, my (selfish) reason was for me to know more: the best way to learn is to teach. Of course, I also wanted other people to know more. Hence my focus on starting from the fundamentals and demystifying everything. The full code is on GitHub.

For logistical reasons, I chose to do two presentations.

Presentation #1: How Deep Learning Works

This is what I submitted to Jochen and his colleagues:

Everyone is talking of Artificial Intelligence today as the next Big Thing! This session explains, from the point of view of a programmer, what a Neuron is, how a Neural Network can be built and how to use frameworks such as TensorFlow and TFLearn to quickly experiment with Deep Learning.

I was fortunate to have a good photographer, Sumeet Mudhoo, present at the beginning of my talk and he kindly gave me permission to use a few of his photos to illustrate this post.

I started with how a simple artificial neuron capable of learning works. As I always do, I like to stand on the shoulders of giants and, therefore, I based this part of my presentation on an article found online: How Neural Networks Work. The programming part is fascinating. One neuron is just a simple function which takes some inputs and a corresponding number of weights and produces one output (generally by computing a dot product), so the raw answer can vary a lot. Passing that answer through a Sigmoid function makes sure the neuron can only produce an answer between 0 and 1 (which is perfect for digital computers). Another benefit is that, because of the shape of the Sigmoid function, extremes are ignored.
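The neuron just described can be sketched in a few lines. This is a toy illustration of the idea (a dot product squashed by a sigmoid), not the article's actual code:

```python
import numpy as np

def sigmoid(x):
    # Maps any real number into the interval (0, 1)
    return 1 / (1 + np.exp(-x))

def neuron(inputs, weights):
    # One neuron: dot product of inputs and weights, then sigmoid
    return sigmoid(np.dot(inputs, weights))
```

For instance, when the weighted sum is zero, the sigmoid gives exactly 0.5, the midpoint of its range.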

Another fascinating aspect of the code is the learning process. At the beginning, the weights of the neuron are far from what they should be and, consequently, the result produced is not very good (compared to the expected results). The distance between the two is then calculated and this distance is then used to refine the weights. One beautiful aspect of this learning process is that the derivative of the Sigmoid is used, once again to ignore extremes.
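That learning loop, where the distance between actual and expected output refines the weights and the derivative of the Sigmoid damps updates at the extremes, can be sketched as follows. This is a toy example in the spirit of the article, with made-up training data (the output is simply the first input column):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Toy training set: the expected output is the first input column
X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
y = np.array([[0, 1, 1, 0]]).T

np.random.seed(1)
weights = 2 * np.random.random((3, 1)) - 1   # start far from the answer

for _ in range(10000):
    output = sigmoid(np.dot(X, weights))
    error = y - output                        # distance from expected result
    # The sigmoid derivative, output * (1 - output), shrinks updates
    # for confident (extreme) outputs, ignoring extremes as described
    weights += np.dot(X.T, error * output * (1 - output))
```

After the loop, the neuron's outputs are close to the expected 0s and 1s even though the initial weights were random.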

I then moved to showing the audience how the TensorFlow library, created by Google, works and can be used to create a neural network: a network of many neurons arranged in layers (one input layer, one output layer and one or two hidden layers, hence “deep” learning).

Once again, I relied on an online post, this time Tensorflow demystified. TensorFlow is quite complex to use as a programmer needs to be very explicit in the way the neural network is expressed.

This is, in essence, how a neural network is built in TensorFlow:

import tensorflow as tf

# train_x (the preprocessed training data) and data (the input tensor
# fed to the network) are defined earlier in the article's full code

# hidden layers and their nodes
n_nodes_hl1 = 32
n_nodes_hl2 = 32

# classes in our output
n_classes = 2

# initialise weights and biases with random values
# and define the output layer
hidden_1_layer = { 'f_fum':  n_nodes_hl1,
                   'weight': tf.Variable(tf.random_normal([len(train_x[0]), n_nodes_hl1])),
                   'bias':   tf.Variable(tf.random_normal([n_nodes_hl1])) }

hidden_2_layer = { 'f_fum':  n_nodes_hl2,
                   'weight': tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                   'bias':   tf.Variable(tf.random_normal([n_nodes_hl2])) }

output_layer = { 'f_fum':  None,
                 'weight': tf.Variable(tf.random_normal([n_nodes_hl2, n_classes])),
                 'bias':   tf.Variable(tf.random_normal([n_classes])) }

# Let's define the neural network:

# hidden layer 1: (data * W) + b
l1 = tf.add(tf.matmul(data, hidden_1_layer['weight']), hidden_1_layer['bias'])
l1 = tf.sigmoid(l1)

# hidden layer 2: (hidden_layer_1 * W) + b
l2 = tf.add(tf.matmul(l1, hidden_2_layer['weight']), hidden_2_layer['bias'])
l2 = tf.sigmoid(l2)

# output: (hidden_layer_2 * W) + b
output = tf.matmul(l2, output_layer['weight']) + output_layer['bias']

Phew! Lots and lots of lines of code which, obviously, are error-prone enough that someone was bound to create a higher-level library which is easier to use.

This is why I quickly transitioned to TFLearn, a higher-level API to Tensorflow, based yet again on another article: Deep Learning in 7 lines of code. Using TFLearn, the same neural network can easily be built like this:

net = tflearn.input_data(shape=[None, 5])
net = tflearn.fully_connected(net, 32)
net = tflearn.fully_connected(net, 32)
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net)

# DNN means Deep Neural Network
model = tflearn.DNN(net, tensorboard_dir='tflearn_logs')

The softmax function is a function that takes as input a vector of K real numbers, and normalizes it into a probability distribution consisting of K probabilities. That is, prior to applying softmax, some vector components could be negative, or greater than one; and might not sum to 1; but after applying softmax, each component will be in the interval (0,1), and the components will add up to 1.
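That definition translates directly into a few lines of code. Here is a minimal sketch (subtracting the maximum first is a standard trick for numerical stability, not part of the mathematical definition):

```python
import numpy as np

def softmax(v):
    # Shift by the max so np.exp never overflows; the result is unchanged
    e = np.exp(v - np.max(v))
    # Each component ends up in (0, 1) and the components sum to 1
    return e / e.sum()
```

So a vector with negative components or components greater than one comes out as a proper probability distribution, with the largest input mapped to the largest probability.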

As for the use of Linear Regression, it is a Supervised Learning algorithm whose goal is to predict continuous, numerical values based on given input data. From a geometrical perspective, each data sample is a point. Linear Regression tries to find the parameters of the linear function such that the distance between all the points and the line is as small as possible. The algorithm used to update the parameters is called Gradient Descent.
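Gradient Descent on a linear function fits in a few lines. This toy sketch (my own illustrative data, fitting the line y = 2x + 1) repeatedly nudges the parameters in the direction that reduces the mean squared error:

```python
import numpy as np

# Toy data lying exactly on the line y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2 * x + 1

w, b = 0.0, 0.0      # start with a flat line through the origin
lr = 0.05            # learning rate

for _ in range(5000):
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)   # d(MSE)/dw
    grad_b = 2 * np.mean(pred - y)         # d(MSE)/db
    w -= lr * grad_w                       # step against the gradient
    b -= lr * grad_b
```

After enough iterations, w and b converge towards the true slope 2 and intercept 1.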

On arriving at this point, everyone present (including myself) understood what a neuron is, what a neural network is and how to build one capable of learning using TFLearn. With this new knowledge, we were all ready to build a chatbot.

Presentation #2: Building a Chatbot using Deep Learning

This is what I submitted to Jochen and his colleagues:

Smart devices of today, powered by e.g. Google Assistant, Apple Siri or Amazon Alexa, can chat with us in quite surprising ways. It is therefore interesting for a developer to understand how chatbots work. In this session, we will build a chatbot using Deep Learning techniques.

Building a chatbot is quite straightforward using TFLearn, especially with high-quality articles such as Contextual Chatbots with Tensorflow. The idea is to have a set of sentences and, for each, a set of possible responses. In the past, this would have been done using a set of rigid if-then-else statements. Today, we tend to use two novel programming techniques.

Firstly, the sentences are not actually sentences. Rather, they are patterns which are matched against what the user is asking, using the Natural Language Toolkit. Words are stemmed so that the chatbot becomes easier to interact with (in the sense that the user is not forced to use specific words or tenses).
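To give a feel for the idea, here is a rough sketch of stemmed pattern matching. NLTK's real stemmers are far more sophisticated; this naive suffix-stripper and the matches helper are my own illustration, not NLTK code:

```python
def stem(word):
    # Naive stemmer: strip a few common suffixes so that, e.g.,
    # "beaches" and "beach" reduce to the same stem
    word = word.lower()
    for suffix in ("ing", "es", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def matches(pattern, user_input):
    # A pattern matches when all of its stems appear among the input's stems
    pattern_stems = {stem(w) for w in pattern.split()}
    input_stems = {stem(w) for w in user_input.split()}
    return pattern_stems <= input_stems
```

Because matching happens on stems rather than exact words, the pattern "beach" matches a question about "beaches" without any if-then-else gymnastics.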

The second interesting part is that the chatbot is contextualised. In my example of a chatbot knowing about the beaches in Mauritius, here is an example interaction:

==> Where are the best beaches?
Where are you staying at this moment?

==> In the North
Mont Choisy is one of the longest beaches in Mauritius.

The second response is conditional on the first question (“Where are the best beaches?”) having been asked. This is done using training data such as the following (found in full on GitHub):

{
  "tag":"beach",
  "patterns":[
    "Beach",
    "Seaside",
    "Place to swim"
  ],
  "responses":[
    "Where are you right now?",
    "In which part of Mauritius do you plan to go?",
    "Where are you staying at this moment?"
  ],
  "context_set": "beach"
},
{
  "tag":"beach_north",
  "patterns":[
    "North",
    "Northern",
    "Grand Baie"
  ],
  "responses":[
    "Trou aux Biches is shallow and calm, with gently shelving sands, making it ideal for families.",
    "The water is deep at Pereybere but still very calm.",
    "Mont Choisy is one of the longest beaches in Mauritius.",
    "La Cuvette is a tucked-away jewel and one of the shortest beaches in Mauritius."
  ],
  "context_filter":"beach"
}

The context_set happens as soon as the user asks for beach, seaside or place to swim (or variations thereof), and the responses about the four beaches in the north are then conditional on (1) the user saying that he is in the north, in the northern part of Mauritius or at Grand Baie and (2) the context for beach having been set previously.

The rest is just about creating a vector of 0s and 1s based on the input of the user (where each 0 or 1 corresponds to whether or not one of the stem words is present in the input, out of all possible stem words from all patterns defined). This vector is then used to predict an output based on the learning previously done, and a response is selected randomly. The complete code can be found in the article indicated previously.
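That vector of 0s and 1s is the classic bag-of-words encoding. A minimal sketch (with stemming omitted and a made-up vocabulary for illustration):

```python
def bag_of_words(sentence, vocabulary):
    # vocabulary: the list of all stem words from all patterns defined.
    # Each position of the result is 1 if that word occurs in the
    # sentence and 0 otherwise.
    words = {w.lower() for w in sentence.split()}
    return [1 if v in words else 0 for v in vocabulary]
```

It is this fixed-length vector, not the raw text, that is fed to the neural network for prediction.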

At the end, I was happy that my two objectives had been attained: I knew more and the audience knew more. Perfect.

Devcon 2019 Presentation: How Deep Learning works and Building a Chatbot using Deep Learning from Avinash Meetoo

Filed Under: Computing, Education, Future, News, Science, Society, Technology


Copyright © 2025 by Avinash Meetoo · Shared under an Attribution 4.0 International Creative Commons license