#1 2020-09-13 14:32:37

CamillaVet
Member
From: Canada, Thamesville
Registered: 2020-09-11
Posts: 6

especially LSTM (a useful and mainstream type of RNN)

Hypothesis Test for real problems.
August 26, 2020, in Statistics, by Saurav Singla.
Hypothesis tests are important for evaluating answers to questions about samples of data.
A statistical hypothesis is a belief about a population parameter.
This belief may or may not be correct.
In other words, hypothesis testing is a formal technique used by scientists to support or reject statistical hypotheses.
The ideal way to decide whether a statistical hypothesis is correct would be to examine the whole population.
Since that is usually impractical, we normally take a random sample from the population and inspect it instead.
If the sample data are not consistent with the statistical hypothesis, the hypothesis is rejected.
Types of hypothesis: There are two kinds of hypothesis, and the Null Hypothesis (H0) and the Alternative Hypothesis (Ha) must be mutually exclusive events.
• The null hypothesis is usually the hypothesis that the event will not happen.
• The alternative hypothesis is the hypothesis that the event will happen.
Why do we need hypothesis testing?
Suppose a cosmetics company wants to launch a new shampoo in the market.
In this situation, it can use hypothesis testing to decide whether the new product will succeed.
The likelihood of the product failing in the market is taken as the null hypothesis, and the likelihood of the product being profitable as the alternative hypothesis.
By following the hypothesis-testing process, the company can forecast the product's success.
How to Carry Out Hypothesis Testing.
First, state the two hypotheses so that only one can be correct, i.e. the two events are mutually exclusive.
Next, formulate an analysis plan that lays out how the data will be evaluated.
Then carry out the plan and analyze the sample data.
Finally, examine the outcome and either accept or reject the null hypothesis.
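The four steps above can be sketched as a small two-tailed z-test using only Python's standard library. The hypothesized mean, the population standard deviation, and the sample values are made-up numbers for illustration:

```python
from statistics import NormalDist, mean
from math import sqrt

# Step 1: state the hypotheses (mutually exclusive).
#   H0: the population mean is 50      Ha: the population mean is not 50
mu_0 = 50.0
sigma = 4.0          # population standard deviation, assumed known here

# Step 2: plan the analysis -- a two-tailed z-test at significance level 0.05.
alpha = 0.05

# Step 3: collect the sample and compute the test statistic.
sample = [52.1, 48.9, 53.4, 51.7, 49.8, 54.2, 50.6, 52.9]
z = (mean(sample) - mu_0) / (sigma / sqrt(len(sample)))

# Step 4: compare with the rejection region and decide.
z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
decision = "reject H0" if abs(z) > z_crit else "fail to reject H0"
print(z, z_crit, decision)
```

Here the sample mean of 51.7 gives z ≈ 1.20, which is inside the acceptance region, so the null hypothesis is not rejected.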
Another example: suppose a person has applied for a typing job and has stated in his resume that his typing speed is 70 words per minute.
The recruiter may want to test this claim. If the claim holds up, the recruiter will hire him; otherwise he will be rejected. So the applicant types a sample letter, and his measured speed turns out to be 63 words per minute.
The recruiter can now decide whether or not to employ him, assuming he meets all the other qualification criteria.
This procedure illustrates hypothesis testing in layman's terms.
In statistical terms, the claim that his typing speed is 70 words per minute is the hypothesis to be tested, the so-called null hypothesis.
The alternative hypothesis is, of course, that his typing speed is not 70 words per minute.
So the average typing speed is the population parameter, and the sample typing speed is the sample statistic.
The criterion for accepting or rejecting his claim is chosen by the recruiter.
For instance, he may decide that an error of 6 words is acceptable to him, so he would accept the claim for any speed between 64 and 76 words per minute.
In that case, the sample speed of 63 words per minute leads him to reject the claim.
Furthermore, the conclusion will be that the applicant made a false claim.
However, if the recruiter extends his acceptance region to plus/minus 7 words, i.e. 63 to 77 words, he would accept the claim.
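The recruiter's decision rule above can be written as a tiny function; the claimed speed (70 wpm), observed speed (63 wpm), and tolerances come straight from the example:

```python
def accept_claim(claimed, observed, tolerance):
    """Accept the applicant's claim if the observed speed falls
    inside the acceptance region claimed +/- tolerance."""
    return claimed - tolerance <= observed <= claimed + tolerance

print(accept_claim(70, 63, 6))  # tolerance of 6 words: region 64..76, so 63 is rejected -> False
print(accept_claim(70, 63, 7))  # tolerance of 7 words: region 63..77, so 63 is accepted -> True
```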
To conclude, hypothesis testing is a procedure for testing claims about a population based on a sample.
It is a fascinating practical subject with quite a bit of statistical jargon.
You have to dig deeper to get familiar with the details.
Significance Level and Rejection Region for Hypothesis Tests.
The type I error probability is normally denoted by α and usually set to 0.05; the value of α is known as the significance level.
The rejection region is the set of sample outcomes that leads to the rejection of the null hypothesis, and the significance level α determines the size of the rejection region. Sample results in the rejection region are labelled statistically significant at level α.
The effect of varying α is this: if α is small, for example 0.01, the probability of a type I error is small, and a lot of sample evidence for the alternative hypothesis is needed before the null hypothesis can be rejected.
Conversely, when α is larger, for example 0.10, the rejection region is larger, and it is easier to reject the null hypothesis.
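The effect of α on the rejection region can be seen numerically. A short sketch using Python's standard library (the α values are just the examples from the text):

```python
from statistics import NormalDist

# Critical values |z| must exceed for a two-tailed z-test at each level.
crit = {alpha: NormalDist().inv_cdf(1 - alpha / 2) for alpha in (0.01, 0.05, 0.10)}

for alpha, z_crit in sorted(crit.items()):
    print(f"alpha={alpha}: reject H0 when |z| > {z_crit:.3f}")
```

A smaller α pushes the critical value further out (about 2.576 at α = 0.01 versus about 1.645 at α = 0.10), so more evidence is needed to reject the null hypothesis.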
Significance from p-values.
An alternative approach is to avoid fixing a significance level and instead simply report how significant the sample evidence is.
This approach is now more widespread. It is accomplished by means of a p-value.
The p-value gauges the strength of the evidence against the null hypothesis.
It is the probability of obtaining the observed value of the test statistic, or a value with even stronger evidence against the null hypothesis (H0), if the null hypothesis is true.
The smaller the p-value, the more evidence there is in favor of the alternative hypothesis.
Sample evidence is statistically significant at the α level only if the p-value is less than α.
The two approaches are connected for two-tailed tests.
When using a confidence interval to perform a two-tailed hypothesis test, reject the null hypothesis if and only if the hypothesized value does not lie within the confidence interval for the parameter.
Hypothesis Tests and Confidence Intervals.
Hypothesis tests and confidence intervals are cut from the same cloth.
An outcome whose 95% confidence interval excludes the hypothesized value is an outcome with p < 0.05 under the corresponding hypothesis test, and vice versa.
A p-value tells you the largest confidence level at which the interval still excludes the hypothesized value.
In other words, if p < 0.03 against the null hypothesis, a 97% confidence interval does not include the null value.
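This duality can be checked numerically for a two-tailed z-test. The numbers below (a hypothesized mean of 100, σ = 15, n = 36, an observed sample mean of 105.5) are made-up assumptions for illustration:

```python
from statistics import NormalDist
from math import sqrt

mu_0, sigma, n = 100.0, 15.0, 36        # hypothetical population and sample size
x_bar = 105.5                           # hypothetical observed sample mean
se = sigma / sqrt(n)

z = (x_bar - mu_0) / se
p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed p-value

# A confidence interval at exactly level (1 - p) has mu_0 on its boundary;
# any interval with confidence below (1 - p) excludes mu_0.
z_edge = NormalDist().inv_cdf(1 - p / 2)
lower, upper = x_bar - z_edge * se, x_bar + z_edge * se
print(p, (lower, upper))   # mu_0 sits on the boundary of this interval
```

Here p ≈ 0.028, so a 97.2% confidence interval has the null value exactly on its edge, and any narrower interval (e.g. 95%) excludes it.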
Hypothesis Tests for a Population Mean.
We perform a t-test when the population standard deviation is unknown.
The general purpose is to compare the sample mean with some hypothesized population mean, to assess whether the observed reality differs so much from the hypothesis that we can say with confidence that the hypothesized population mean is not, in fact, the real population mean.
Hypothesis Tests for a Population Proportion.
When you have two different populations, a z-test helps you decide whether the proportion of a certain feature is the same in both populations.
For instance, whether the proportion of males is equal between two countries.
Hypothesis Test for Equal Population Variances.
The F-test is based on the F-distribution and is used to compare the variances of two independent samples.
It is also used in the context of analysis of variance for judging the significance of more than two samples.
The t-test and the F-test are two entirely different things.
The t-test is used to estimate a population parameter, such as the population mean, and is likewise used for hypothesis tests about the population mean.
However, it should only be used when we do not know the population standard deviation.
If we know the population standard deviation, we use a z-test instead.
We can likewise use the t statistic to approximate the population mean.
The t statistic is also used for finding the difference between two population means with the help of sample means.
The z statistic or t statistic is used to estimate population parameters such as the population mean and population proportion.
It is likewise used for testing hypotheses about the population mean and population proportion.
In contrast to the z or t statistic, where we deal with means and proportions, the chi-square or F-test is used for examining the variance of the samples.
The F statistic is the ratio of the variances of two samples.
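The F statistic as described above is just a ratio of two sample variances (larger variance on top by convention). The sample data below is illustrative:

```python
from statistics import variance

def f_statistic(sample_a, sample_b):
    """Ratio of the two sample variances, larger over smaller."""
    va, vb = variance(sample_a), variance(sample_b)
    return max(va, vb) / min(va, vb)

a = [12.1, 11.8, 12.5, 12.0, 11.9]
b = [12.4, 13.6, 11.2, 12.9, 10.8]
print(f_statistic(a, b))  # compare against an F critical value to judge equality of variances
```

A value near 1 suggests equal variances; here the ratio is large (about 18.7), which would count as evidence against equal population variances once compared with the appropriate F critical value.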
Conclusion.
Hypothesis testing helps us draw coherent conclusions, establish the relationships among variables, and gives direction for further investigation.
A hypothesis mostly results from speculation about studied behaviour, natural phenomena, or proven theory.
An honest hypothesis should be clear, detailed, and consistent with the data.
After establishing the hypothesis, the next step is to validate or test it.
Testing a hypothesis comprises the process that lets us agree or disagree with the stated hypothesis.
Understanding LSTM forward propagation in two ways.
August 21, 2020, in Artificial Intelligence, Data Science Hack, Deep Learning, Machine Learning, Predictive Analytics, by Yasuto Tamura.
*This article is only for the sake of understanding the equations on the second page of the paper named “LSTM: A Search Space Odyssey”.
If you have no trouble understanding the equations of LSTM forward propagation, I recommend you to skip this article and go to the next article.
*This article is the fourth article of “A gentle introduction to the tiresome part of understanding RNN.”
1. Preface.
I  heard that in Western culture, smart people write textbooks so that other normal people can understand difficult stuff, and that is why textbooks in Western countries tend to be bulky, but also they are not so difficult as they look.
On the other hand in Asian culture, smart people write puzzling texts on esoteric topics, and normal people have to struggle to understand what noble people wanted to say.
Publishers also require the authors to keep the texts as short as possible, so even though the textbooks are thin, students usually have to read them several times because they are too abstract.
Both styles have pros and cons, and usually I prefer Japanese textbooks because they are concise; sometimes it is annoying to read long Western-style texts with concrete, straightforward examples just to reach one conclusion.
But a problem is that when it comes to explaining LSTM, almost all the text books are like Asian style ones.
Every study material seems to skip the proper steps necessary for “normal people” to understand its algorithms.
But after actually making concrete slides on the mathematics of LSTM, I understood why: if you write down all the equations of LSTM forward/back propagation, it gets massive, and I actually had to make 100 pages of animated PowerPoint slides to make it understandable to people like me.
I already had a nagging feeling: “Does it really help to understand only LSTM with this precision?
I should do more practical coding.” For example, François Chollet, the developer of Keras, said in his book:

For me that sounds like “We have already implemented RNNs for you

so just shut up and use TensorFlow/Keras.” Indeed, I have never cared about the architecture of my MacBook Air, but I just use it every day, so I think he has a point.
To make matters worse, for me, a promising algorithm called Transformer seems to be replacing the position of LSTM in natural language processing.
But in this article series and in my PowerPoint slides, I tried to explain as much as possible, contrary to his advice.
But I think, or rather hope,  it is still meaningful to understand this 23-year-old algorithm, which is as old as me.
I think LSTM did build a generation of algorithms for sequence data, and in fact Sepp Hochreiter, the inventor of LSTM, has received the IEEE CIS Neural Networks Pioneer Award 2021 for his work.
I hope those who study sequence data processing in the future would come to this article series, and study basics of RNN just as I also study classical machine learning algorithms.
*In this article “Densely Connected Layers” is written as “DCL,” and “Convolutional Neural Network” as “CNN.”
2. Why LSTM?
First of all, let's take a brief look at what I said about the structure of RNNs in the first and the second article.
A simple RNN is basically a densely connected network with a few layers.
But the RNN gets an input at every time step, and it gives out an output at that time step.
Part of the information in the middle layer is passed on to the next time step, and in the next time step the RNN again gets an input and gives out an output.
Therefore, if you focus on its recurrent connections, a simple RNN virtually behaves almost the same way during forward/back propagation as a densely connected network with many layers.

That is why simple RNNs suffer from vanishing/exploding gradient problems

where gradients exponentially vanish or explode as they are multiplied many times through many layers during back propagation.
To be exact, I think you need to consider this problem more precisely, as you can see in this paper.
But for now, please at least keep in mind that when you calculate the gradient of an error function with respect to the parameters of a simple RNN, you have to multiply the recurrent weights many times, and this type of calculation usually leads to the vanishing/exploding gradient problem.
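A toy sketch of that repeated multiplication: in the scalar case, a gradient multiplied by the same recurrent weight at every time step either shrinks toward zero or blows up, depending on whether the weight's magnitude is below or above 1. The weight values and step count below are arbitrary:

```python
def repeated_product(weight, steps, grad=1.0):
    """Multiply a gradient by the same scalar weight once per time step,
    mimicking back propagation through `steps` recurrent steps."""
    for _ in range(steps):
        grad *= weight
    return grad

print(repeated_product(0.9, 50))   # |w| < 1: gradient vanishes (about 0.005)
print(repeated_product(1.1, 50))   # |w| > 1: gradient explodes (above 100)
```

In a real RNN the multiplication involves Jacobian matrices rather than a scalar, but the eigenvalues of those matrices play the same role as the scalar weight here.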
LSTM was invented as a way to tackle such problems as I mentioned in the last article.
3. How to display LSTM.
I would like you to just do an image search on Google, Bing, or Yahoo! and type in “LSTM.” You will find many figures, but LSTM charts are roughly classified into two types: in this article I call them the “Space Odyssey type” and the “electronic circuit type,” and in conclusion I highly recommend you to understand LSTM as the “electronic circuit type.”
*I just randomly came up with the terms “Space Odyssey type” and “electronic circuit type” because the former is used in the paper I mentioned, and the latter looks like an electronic circuit to me.
You do not have to take these names seriously.
However, note that not all the well-made explanations of LSTM use the “electronic circuit type,” and I am sure you will sometimes have to understand LSTM as the “Space Odyssey type.” The paper “LSTM: A Search Space Odyssey,” from which I learned a lot about LSTM, also adopts the “Space Odyssey type.” The main reason why I recommend the “electronic circuit type” is that its behavior looks closer to that of simple RNNs, which you will have seen if you have read my former articles.
*The behaviors of the two look different, but of course they are doing the same things.
If you have some understanding of DCLs, it should not be so hard to understand how simple RNNs work, because simple RNNs are mainly composed of linear connections of neurons and weights, whose structure is the same almost everywhere.
And basically they have only straightforward linear connections, as you can see below.
But from now on, I would like you to give up the idea that LSTM is composed of connections of neurons like the head image of this article series.
Drawing it that way would be chaotic, and I do not want to make a figure of it in PowerPoint.
In short, sooner or later you have to understand the equations of LSTM.
4. Forward propagation of LSTM in the “electronic circuit type”.
*For a deeper understanding of the mathematics of LSTM forward/back propagation, I recommend you download my slides.
The behavior of an LSTM block is quite similar to that of a simple RNN block: an LSTM block gets an input at every time step and gets information from the block of the last time step via recurrent connections.
And the block passes information on to the next block.
Let’s look at the simplified architecture of  an LSTM block.
First of all, you should keep in mind that an LSTM block has two streams of information: one going through all the gates, and one going through the cell connections, the “highway” of the LSTM block.
For simplicity, we will look at the architecture of an LSTM block without peephole connections (the lines in blue).
The flow of information through the cell connections is relatively uninterrupted.

This helps LSTMs to retain information for a long time

In an LSTM block, the input and the output of the former time step separately go through sections named “gates”: the input gate, the forget gate, the output gate, and the block input.
The outputs of the forget gate, the input gate, and the block input join the highway of cell connections to renew the value of the cell.
*The two small dots on the cell connections are the “on-ramps” of the cell connection highway.
*You will see the terms “input gate,” “forget gate,” and “output gate” almost everywhere, but what the “block input” is called depends on the textbook.
Let’s look at the structure of an LSTM block a bit more concretely.
An LSTM block at time step t gets y^(t-1), the output at the last time step, and c^(t-1), the information of the cell at time step t-1, via recurrent connections.
The block at time step t also gets the input x^t, which separately goes through each gate, together with y^(t-1).
After some calculations and activation, each gate gives out an output.
The outputs of the forget gate, the input gate, the block input, and the output gate are f^t, i^t, z^t, and o^t respectively.
The outputs of the gates are mixed with c^(t-1), the LSTM block gives out an output y^t, and it passes y^t and c^t to the next LSTM block via recurrent connections.
Following the notation of the paper mentioned above (without peepholes), you calculate z^t, i^t, f^t, and o^t as below, where W and R are the input and recurrent weight matrices, b the biases, and σ the sigmoid function:

z^t = tanh(W_z x^t + R_z y^(t-1) + b_z)
i^t = σ(W_i x^t + R_i y^(t-1) + b_i)
f^t = σ(W_f x^t + R_f y^(t-1) + b_f)
o^t = σ(W_o x^t + R_o y^(t-1) + b_o)
*You have to keep it in mind that the equations above do not include peephole connections, which I am going to show with blue lines in the end.
The equations above are quite straightforward if you understand the forward propagation of simple neural networks.
In each gate you add linear products of the input x^t and the previous output y^(t-1) with different weights.
What makes LSTMs different from simple RNNs is how the outputs of the gates are mixed with the cell connections.
In order to explain that, I need to introduce a mathematical operator called the Hadamard product, denoted ⊙.
This is a very simple operator.
It produces the elementwise product of two vectors or matrices of identical shape.
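For instance, in Python the Hadamard product of two plain lists is just an elementwise multiplication:

```python
def hadamard(a, b):
    """Elementwise product of two equal-length vectors."""
    return [x * y for x, y in zip(a, b)]

print(hadamard([1.0, 0.5, 0.0], [0.2, 0.4, 0.9]))  # [0.2, 0.2, 0.0]
```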
With the Hadamard product operator, the renewed cell c^t and the output y^t are calculated as below:

c^t = f^t ⊙ c^(t-1) + i^t ⊙ z^t
y^t = o^t ⊙ tanh(c^t)

The values of i^t, f^t, and o^t are compressed into the range (0, 1) by the sigmoid function, and z^t into (-1, 1) by tanh.
You can see that the input gate and the block input give new information to the cell.
The term f^t ⊙ c^(t-1) means that the output of the forget gate “forgets” the cell of the last time step by multiplying it elementwise by values between 0 and 1.
The cell c^t is then activated with tanh, and the output of the output gate “suppresses” the activated value of c^t.
In other words, the output gate decides how much information to give out as the output of the LSTM block.
The output of every gate depends on the input x^t and the recurrent connection y^(t-1).
That means an LSTM block learns to forget the cell of the last time step, to renew the cell, and to suppress the output.
To describe it in an extreme manner: if all the outputs of every gate were always 1, the LSTM would forget nothing, retain the information of the inputs at every time step, and give out everything.
And if all the outputs of every gate were always 0, the LSTM would forget everything, receive no inputs, and give out nothing.
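The gating behavior described in this section can be sketched as a single LSTM forward step in plain Python. This is an illustration, not a trained model: biases are omitted for brevity, there are no peephole connections, and the tiny weight matrices are arbitrary constants I made up:

```python
from math import exp, tanh

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

def matvec(W, v):
    """Matrix-vector product for nested lists."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def hadamard(a, b):
    return [x * y for x, y in zip(a, b)]

def lstm_step(x, y_prev, c_prev, Wz, Wi, Wf, Wo, Rz, Ri, Rf, Ro):
    z = [tanh(v) for v in add(matvec(Wz, x), matvec(Rz, y_prev))]     # block input
    i = [sigmoid(v) for v in add(matvec(Wi, x), matvec(Ri, y_prev))]  # input gate
    f = [sigmoid(v) for v in add(matvec(Wf, x), matvec(Rf, y_prev))]  # forget gate
    o = [sigmoid(v) for v in add(matvec(Wo, x), matvec(Ro, y_prev))]  # output gate
    c = add(hadamard(f, c_prev), hadamard(i, z))                      # renew the cell
    y = hadamard(o, [tanh(v) for v in c])                             # suppressed output
    return y, c

# Two hidden units, one input feature; the same weights reused for every gate.
W = [[0.5], [0.3]]
R = [[0.1, 0.0], [0.0, 0.1]]
y, c = lstm_step([1.0], [0.0, 0.0], [0.0, 0.0], W, W, W, W, R, R, R, R)
print(y, c)
```

Feeding y and c back in as y_prev and c_prev for the next input would carry the cell information forward along the “highway,” which is exactly the recurrence described above.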
This model has one problem: the outputs of each gate do not directly depend on the information in the cell.
To solve this problem, some LSTM models introduce flows of information from the cell to each gate, called peephole connections, which are shown as blue lines in the figure below.
Whether an LSTM model has peephole connections depends on the library you use, and the model I have shown is one standard LSTM structure.
However, no matter how complicated the structure of an LSTM block looks, you usually cover it with a black box, as below, and show its behavior in a very simplified way.
5. Space Odyssey type.
I personally think there is no advantage in understanding how LSTMs work through the Space Odyssey type of chart, but in several cases you will have to use this type of chart.
So I will briefly explain how to read it, based on the understanding of LSTMs you have gained through this article.
In the Space Odyssey type of LSTM chart, the cell is at the center.
The electronic circuit type of chart shows the flow of information of the cell as an uninterrupted “highway” in the LSTM block.
In a Space Odyssey type of chart, on the other hand, the information of the cell rotates at the center.
And each gate gets the information of the cell through peephole connections, along with x^t, the input at time step t, and y^(t-1), the output at the last time step, which comes through recurrent connections.
In the Space Odyssey type of chart, you can see more clearly that the information of the cell goes to each gate through the peephole connections in blue.
Each gate calculates its output.
Just as in the charts you have seen, the dotted lines denote information from the past.
First, the information of the cell at time step t-1 goes to the forget gate and is mixed with the output of the forget gate; in this process the cell is partly “forgotten.” Next, the outputs of the input gate and the block input are mixed to generate part of the new value of the cell at time step t.
The partly “forgotten” cell goes back to the center of the block, where it is mixed with the output of the input gate and the block input.
That is how the cell is renewed.
The value of the new cell then flows to the top of the chart, where it is mixed with the output of the output gate.
Or you can also say that the information of the new cell is “suppressed” by the output gate.
I have finished the first four articles of this article series, and finally I am going to write about the back propagation of LSTM in the next article.
I have to say that everything I have written so far is preparation for the next article, and for my long, long PowerPoint slides.
[References]
[1] Klaus Greff, Rupesh Kumar Srivastava, Jan Koutník, Bas R. Steunebrink, Jürgen Schmidhuber, “LSTM: A Search Space Odyssey,” (2017)
[2] François Chollet, “Deep Learning with Python,” (2018), Manning, pp. 202-204
[3] “Sepp Hochreiter receives IEEE CIS Neural Networks Pioneer Award 2021,” Institute of Advanced Research in Artificial Intelligence, (2020) URL: https://www.iarai.ac.at/news/sepp-hochreiter-receives-ieee-cis-neural-networks-pioneer-award-2021/
[4] Oketani Takayuki, “Machine Learning Professional Series: Deep Learning,” (2015), pp. 120-125
[5] Harada Tatsuya, “Machine Learning Professional Series: Image Recognition,” (2017), pp. 252-257
[6] “Understandable LSTM ~ With the Current Trends,” Qiita, (2015) URL: https://qiita.com/t_Signull/items/21b82be280b46f467d1b

How Data Science Can Benefit Nonprofits.
August 4, 2020, in Use Case, by Luke Smith.
Image Source: https://pixabay.com/vectors/pixel-cells-pixel-creative-commons-3704068/
Data science is the poster child of the 21st century, and for good reason.
Data-based decisions have streamlined, automated, and made businesses more efficient than ever before, and there are practically no industries that haven’t recognized its immense potential.
But when you think of data science applications, sectors like marketing, finance, technology, SMEs, and even education are the first that come to mind.
There’s one more sector that’s proving to be an untapped market for data—the social sector.
At first, one might question why non-profit organizations even need complex data applications, but that’s just it—they don’t.
What they really need is data tools that are simple and reliable, because if anything, accountability is the most important component of the way non-profits run.
Challenges for Non-profits and Data Science.
If you’re wondering why many non-profits haven’t already hopped onto the data bandwagon, its because in most cases they lack one big thing—quality data.
One reason is that effective data application requires clean data, and heaps of it, something non-profits struggle with.
Most don’t sell products or services, and their success is reliant on broad, long-term (sometimes decades) results and changes, which means their outcomes are highly unmeasurable.
Metrics and data seem out of place when appealing to donors, who are persuaded more by emotional campaigns.
Data collection is also rare, perhaps only being recorded when someone signs up to the program or leaves, and hardly any tracking in between.
The result is data that’s too little and unreliable to make effective change.
Data collection, perhaps the most important phase, relies heavily on accurate and organized processes.
For non-profits that don't have the resources for accurate manual record-keeping, clean and high-quality data collection is a huge pain point.
However, that issue is now easily avoidable.
For instance, avoiding duplicate files and adopting record-keeping methods like off-site and cloud storage, digital retention, and of course back-up plans are all processes that could save non-profits time, effort, and risk.
On the other hand, poor record management has its consequences, namely on things like fund allocation, payroll, budgeting, and taxes.
It could lead to financial risk, legal trouble, and data loss — all added worries for already under-resourced non-profit organizations.
But now, as non-governmental organizations (NGOs) and non-profits catch up and invest more in data collection processes, there’s room for data science to make its impact.
A growing global movement, ‘Data For Good’ represents individuals, companies, and organizations volunteering to create or use data to help further social causes and support non-profit organizations.
This ‘Data For Good’ movement includes tools for data work that are donated or subsidized, as well as educational programs that serve marginalized communities.
As the movement gains momentum, non-profits are seeing data seep into their structures and turn processes around.
How Can Data Do Social Good?.
With data science set to take the non-profit sector by storm, let's look at some of the ways data can do social good:
Improving communication with donors: Knowing when to reach out to your donors is key.
Catch them in between meetings and you're unlikely to see much enthusiasm; reach them once they're at home with their families and you may see wonderful results, as pointed out in this Forbes article.
The article argues that data can help non-profits understand and communicate with their donors better.
Donor targeting: Cold calls are hit and miss, but with data on their side, non-profits can discover and define their ideal donor and adapt their messaging to reach out to them for better results.
Improving cost efficiency: Costs are a major priority for non-profits and every penny counts.
Data can help decrease costs and streamline financial planning.
Increasing new member sign-ups and renewals: Through data, non-profits can reach out to the right people they want on board, strengthen recruitment processes, and keep track of volunteers for future events or recruitment drives.
Modeling and forecasting performance: With predictive modeling tools, non-profits can make data-based decisions on where they should allocate time and money for the future, rather than go on gut instinct.
Measuring return on investment: For a long time, the outcomes of social campaigns have been perceived as intangible and immeasurable—it’s hard to measure empowerment or change.
With data, non-profits can measure everything from the amount a fundraiser raised against its goal to the cost of every lead in a lead-generation campaign.
Streamlining operations: Finally, non-profits can use data tools to streamline their business processes internally and invest their efforts into resources that need it.
It’s true, measuring good and having social change down to a science is a long way off — but data application is a leap forward into a more efficient future for the social sector.
With mission-aligned processes, data-driven non-profits can realize their potential, redirect their focus from trivial tasks, and onto the bigger picture to drive true change.
Interview: Data Science in the Finance Industry.
July 29, 2020, by Benjamin Aunkofer.
Interview with Torsten Nahm of DKB (Deutsche Kreditbank AG) about data science in the finance industry. Torsten Nahm is Head of Data Science at DKB (Deutsche Kreditbank AG) in Berlin.
He studied mathematics in Bonn with a focus on statistics and numerical methods.
He previously worked as a consultant at KPMG and Oliver Wyman, among others, as well as at the fintech Funding Circle, where he led risk management for the continental European markets.
Hello Torsten, how did you come to your current job at DKB?
The topics of artificial intelligence and machine learning have always fascinated me.
The term “data science” hasn't even been around that long.
In my studies it was called “statistical learning,” but it was essentially about the same topic: an algorithm recognizing patterns in data and then being able to make decisions autonomously.
In my work as a consultant for various companies and banks, it became clear to me in how many places smart algorithms can be applied to improve processes and products, reduce risks, and improve the customer experience.
When DKB was looking for someone to further develop its data science department, I found that an extremely exciting opportunity.
With over 4 million customers and a business model focused on sustainability, DKB offers, in my opinion, ideal conditions for ambitious but also responsible data science.
You have a lot of experience in data science and risk management, in both the banking and the insurance industries. What role do you see for big data analytics in the finance and insurance industries?
Banks and insurers were among the first industries to use computers on a large scale.
It is simply an incredibly data-driven business.
Accordingly, complex analysis methods and big data have played a major role from the beginning, and their importance keeps growing.
Above all, though, technology helps to simplify processes and products for customers and to turn banking into an intuitive, smart experience: the keyword is “the bank in your pocket.”
Here we rely on a strong customer focus and want to grow significantly as a bank over the coming years.
Are the efforts towards digitalization and the use of big data currently driven more from the top, by the board, or from the middle of the company, i.e. from the business units?
Ideally, the two complement each other.
Our board has committed itself to a strong growth strategy based on automation and data-driven processes.
At the same time, we are in dialogue with many areas of the bank that ask us how they can make their products and processes more intelligent and more personal.
What is organizational best practice? Do the analyses take place only in your department, or also in the business units?
I am a strong advocate of a “hub-and-spoke” model, i.e. a strong central department together with decentralized data science teams in the individual business units.
As the central department, we open up new technologies (such as cloud usage or NLP models) and work closely with the decentralized teams.
These, in turn, have the advantage of being close to the respective colleagues, data, and users.
What does project work look like at your company? What profiles, besides the data scientist, are involved?
By now, a clear specialization has taken place within data science.
We distinguish roughly between machine learning scientists, data engineers, and data analysts.
The ML scientists build the actual models, the data engineers consolidate and prepare the data, and the data analysts investigate, for example, trends and anomalies, or get to the bottom of errors in the models.
On top of that come our DevOps engineers, who move the models into production and maintain them there.
And of course, in every project we also have the business stakeholders, who define the project goals with us and provide support from the business side.
And regarding the technical setup: do you rely on on-premise or cloud solutions?
Our entire data science working environment is in the cloud.
This simplifies collaboration enormously, since we can work together on very large datasets directly, for example via S3.
And of course we also benefit from the cloud's great flexibility.
For instance, we do not have to maintain a Spark cluster or powerful multi-GPU instances on premise; we use and pay for them only when we need them.
As of today, are there already big data projects that have left the prototype phase behind and are now being implemented in production?
Yes, we already have several products that have successfully passed the proof-of-concept phase and are now being moved into production.
Among other things, these involve the automation of backend processes based on automatic document capture and interpretation, the detection of customer requests, and the prediction of process times.
To what extent are unstructured data included in the analyses?
That depends entirely on the product in question.
In fact, unstructured data play a major role in most of our projects.
That makes the topics demanding, but also particularly exciting.
Here, deep learning is often the method of choice.
How much do you rely on external vendors? And how much do you build yourselves?
When we start a new project, we always look at what solutions already exist for it.
For many topics there are good, established solutions and standard technologies; just think of OCR.
In the end, however, we have used commercial tools almost not at all.
In many areas, the open-source ecosystem is the most advanced.
In NLP in particular, for example, the state of research is developing at a breakneck pace.
The best models are then released free of charge by Facebook, Google, and others (e.g. BERT and its relatives), and the vendors of commercial solutions are years behind the state of the art.
Last question: how has the corona crisis affected your work?
In day-to-day work, hardly at all.
All our data are by definition available digitally, and our cloud environment is just as usable from the home office.
But brainstorming, especially on complex questions of feature engineering and model architectures, is noticeably more sluggish over video calls than on site at a whiteboard.
So we are glad that we can now selectively meet in our offices again.
Overall, though, DKB had already committed to company-wide flexwork before corona and thus offers flexible working environments per se, beyond the IT departments.
Interview: Data Science in der Finanzbranche
A brief history of neural nets: everything you should know before learning LSTM
July 16, 2020, in Artificial Intelligence, Data Science Hack, Deep Learning, Machine Learning, Predictive Analytics, TensorFlow, by Yasuto Tamura     This series is not a college course on deep learning with strict deadlines for assignments, so let's take a detour from practical topics and take a brief look at the history of neural networks.
The history of neural networks is itself a big topic, big enough to fill another article series.
Usually such articles begin with something like "The term 'AI' was first used by John McCarthy at the Dartmouth conference in 1956…," but you can find many such texts written by people with much more experience in this field.
Therefore I am going to write this article from my own point of view: as an intern writing articles on RNNs, as a movie buff, and as one of many Japanese men who spent a great deal of their childhood playing video games.
We are now in the third AI boom, and some researchers say this boom began in 2006.
A professor at my university said that we are now in a kind of bubble economy in the machine learning/data science industry, but people used to say "stop daydreaming" to AI researchers.
The second AI winter was partly due to the vanishing/exploding gradient problem of deep learning.
And LSTM was invented in 1997 as one way to tackle such problems.
1, First AI boom.
In the first AI boom, I think people were literally "daydreaming," given that the applications of machine learning algorithms were limited to simple tasks like playing chess or checkers, or searching for routes through 2d mazes; this period is sometimes called the era of GOFAI (Good Old Fashioned AI).
Source: https://www.youtube.com/watch?v=K-HfpsHPmvw&feature=youtu.be  Even today, when someone uses the term "AI" merely for tasks done with neural networks, it amuses me, because to me deep learning is just statistically and automatically training neural networks, which are capable of universal approximation, into classifiers/regressors.
The algorithms behind that are quite impressive, to be sure, but the structure of the human brain is much more complicated.
The hype around "AI" had already started in this first AI boom.
Let me take the example of machine translation in this video.
In fact, research on machine translation had already started in the early 1950s, and of specific interest at the time was translation between English and Russian, due to the Cold War.
In the first article of this series, I said that one of the most famous applications of RNNs is machine translation, such as Google Translate and DeepL.
These are a type of machine translation called neural machine translation, because they use neural networks, especially RNNs.
Neural machine translation was an astonishing breakthrough in the machine translation field around 2014.
The previous major type of machine translation was statistical machine translation, based on statistical language models.
And the machine translators of the first AI boom were rule-based machine translators, which are more primitive than statistical ones.
Source: https://news.cornell.edu/stories/2019/09/professors-perceptron-paved-way-ai-60-years-too-soon  The most remarkable invention of this time was, of course, the perceptron by Frank Rosenblatt.
Some people say that this was the first neural network.
Even though you can implement a perceptron in a few lines of Python, they obviously did not have Jupyter Notebook in those days.
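As a small illustration of the point that a perceptron fits in a few lines of Python, here is a minimal sketch of Rosenblatt's learning rule; the learning rate, epoch count, and the logical-AND example are my own arbitrary choices:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt's rule: nudge the weights whenever a sample is misclassified."""
    w = np.zeros(X.shape[1] + 1)  # weights plus a bias term in w[0]
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if w[0] + xi @ w[1:] > 0 else 0
            w[1:] += lr * (target - pred) * xi
            w[0] += lr * (target - pred)
    return w

# Logical AND is linearly separable, so the perceptron converges on it
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w = train_perceptron(X, y)
preds = [1 if w[0] + xi @ w[1:] > 0 else 0 for xi in X]
print(preds)  # [0, 0, 0, 1]
```

The update only fires on mistakes, which is why the rule converges on separable data and cycles forever on inseparable data.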
The perceptron was implemented as a huge instrument named the Mark I Perceptron, and it was composed of randomly connected wires.
I do not know precisely how it worked, but it was a huge effort to implement even the most primitive type of neural network.
They needed a big lighting fixture to capture a 20*20 pixel image using a 20*20 array of cadmium sulfide photocells.
The research by Rosenblatt, however, was criticized by Marvin Minsky in his book, because perceptrons could only be used for linearly separable data.
To make matters worse, the criticism was widely taken to mean that more general, multi-layer perceptrons were also useless for linearly inseparable data (whereas, as I mentioned in the first article, multi-layer perceptrons, namely normal neural networks, can be universal approximators, with the potential to classify/regress various types of complex data).
In case you do not know what "linearly separable" means, imagine that there are data points plotted on a piece of paper.
If an elementary school kid could draw a border line between the two clusters of data with a ruler and a pencil, the 2d data are "linearly separable."
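To make the ruler-and-pencil picture concrete, here is a rough sketch: a brute-force search over candidate lines shows that AND can be separated by a straight line while XOR cannot. The coefficient grid is an arbitrary choice of mine, and for XOR the impossibility holds for any line, not just those on the grid:

```python
import itertools
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])  # linearly separable
y_xor = np.array([0, 1, 1, 0])  # not linearly separable

def separable(X, y, grid):
    """Brute-force search for a line w1*x1 + w2*x2 + b > 0 that splits the classes."""
    for w1, w2, b in itertools.product(grid, repeat=3):
        preds = (w1 * X[:, 0] + w2 * X[:, 1] + b > 0).astype(int)
        if np.array_equal(preds, y):
            return True
    return False

grid = np.linspace(-2, 2, 21)  # coarse grid of candidate coefficients
print(separable(X, y_and, grid))  # True: e.g. x1 + x2 - 1.6 > 0 works
print(separable(X, y_xor, grid))  # False
```

No single line can put (0,1) and (1,0) on one side and (0,0) and (1,1) on the other, which is exactly Minsky's objection to the single-layer perceptron.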
With big disappointments in the research on "electronic brains," the budget for AI research was cut, and AI research entered its first winter.
Source: https://www.nzz.ch/digital/ehre-fuer-die-deep-learning-mafia-ld.1472761?reduced=true and https://anatomiesofintelligence.github.io/posts/2019-06-21-organization-mark-i-perceptron  I think the frame problem (1969), by John McCarthy and Patrick J. Hayes, is also an iconic theory from the end of the first AI boom.
This problem is known through a story about a robot trying to pull its battery, which sits on a wheeled wagon, out of a room.
But there is also a time bomb on the wagon.
The first prototype of the robot, named R1, naively tried to pull the wagon out of the room, and the bomb exploded.
The problem was obvious: R1 was not programmed to consider the risks of taking each action, so the researchers made the next prototype, named R1D1, which was programmed to consider the potential risks of each action.
When R1D1 tried to pull out the wagon, it realized the risk of pulling the bomb along with the battery.
But it soon started considering all the potential risks, such as the risk of the ceiling falling down and the distance between the wagon and each of the walls, and while it was doing so, the bomb exploded.
The next problem was also obvious: R1D1 was not programmed to distinguish whether factors are relevant or irrelevant to the main purpose, so the next prototype, R2D1, was programmed to distinguish them.
This time, R2D1 started deliberating, for every factor it measured, on "whether the factor is irrelevant to the main purpose," and again the bomb exploded.
How can we get a perfect AI, an R2D2?
The situation described above is a bit extreme, but it is said that an AI could also get stuck when it tries to take some super simple actions, like finding a number in a phone book and making a phone call.
It is difficult for an artificial intelligence to decide what is relevant and what is irrelevant, whereas humans do not get stuck on such simple things, and the frame problem is sometimes counted as the most difficult and essential problem in developing AI.
But personally I think the original frame problem was unreasonable in that McCarthy, in his attempts to model the real world, was inflexible in his handling of the various equations involved, treating them all with equal weight regardless of the particular circumstances of a situation.
Some people say that McCarthy, though an advocate of AI, also wanted to see the field come to an end, due to its failure to meet the high expectations it had once aroused.
Besides the frame problem, many other AI-related technological/philosophical problems have been proposed, such as the Chinese room (1980) and the symbol grounding problem (1990), and they are thought of as hardships in inventing artificial intelligence, but I omit those topics in this article.
*The name R2D2 did not come from the famous frame-problem story.
Daniel Dennett first proposed the story of R2D2 in a paper published in 1984, while Star Wars was first released in 1977.
It is said that the name R2D2 came from "Reel 2, Dialogue 2," something George Lucas said during film shooting.
And the design of C3PO came from Maria in Metropolis (1927).
It is said that the most famous AI duo in movie history was inspired by Tahei and Matashichi in The Hidden Fortress (1958), directed by Kurosawa Akira.
Source: https://criterioncollection.tumblr.com/post/135392444906/the-original-r2-d2-and-c-3po-the-hidden-fortress  Interestingly, at the end of the first AI boom, 2001: A Space Odyssey, directed by Stanley Kubrick, was released in 1968.
Unlike conventional fantasy-like AI characters, for example Maria in Metropolis (1927), HAL 9000 was portrayed as a very realistic AI, and the movie already pointed out the risk of an AI going insane when it gets commands from several users.
HAL 9000 has remained a very iconic character in the AI field.
For example, when you say certain quotes from 2001: A Space Odyssey to Siri, you get some parody responses.
I also think you should keep in mind that, in order to make an AI like HAL 9000 come true, RNNs would for now be indispensable in many ways: you would need RNNs for better voice recognition, better conversational systems, and for reading lips.
Source: https://imgflip.com/memetemplate/34339860/Open-the-pod-bay-doors-Hal  *Just as you cannot understand the Monty Python references in the official Python tutorials without watching Monty Python and the Holy Grail, you cannot understand many parodies in AI contexts without watching 2001: A Space Odyssey.
Even though the movie originally had interview footage with researchers and some narration, Stanley Kubrick cut all of it out and made the movie very difficult to understand.
Most people did not, and do not, understand that it is a movie about aliens who gave human beings the homework of coming to Jupiter.
2, Second AI boom/winter.
Source: Fukushima Kunihiko, "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position" (1980)  I am not going to write about the second AI boom in detail, but you should at least keep in mind that the convolutional neural network (CNN) is a keyword of this time.
The Neocognitron, an artificial model of how the visual nerves perceive things, was invented by Kunihiko Fukushima in 1980, and the model is said to be the origin of CNNs.
The Neocognitron was in turn inspired by Hubel and Wiesel's research on the visual nerves.
In 1989, a group at AT&T Bell Laboratories led by Yann LeCun invented the first practical CNN for reading handwritten digits.
Y. LeCun, "Backpropagation Applied to Handwritten Zip Code Recognition" (1989)  Another turning point in this second AI boom was the discovery of the backpropagation algorithm, and the CNN by LeCun was also trained with backpropagation.
LeCun built a deeper neural network in 1998 for more practical uses.
But his research did not attract as much attention as it does today, because AI research entered its second winter at the beginning of the 1990s, partly due to the vanishing/exploding gradient problem of deep learning.
People knew that neural networks had the potential for universal approximation, but when they tried to train naively stacked neural nets, the gradients, which you need for training neural networks, grew or shrank exponentially.
Even though the CNN made by LeCun was the first successful case of a "deep" neural net that did not suffer much from the vanishing/exploding gradient problem, deep learning research also stagnated in this period.
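The exponential growth or shrinkage of gradients is easy to reproduce numerically: backpropagation multiplies one Jacobian factor per layer, so factors consistently below or above 1 vanish or explode with depth. A toy sketch with scalar "layers"; the factors 0.5 and 1.5 and the depth of 50 are made up for illustration:

```python
def gradient_through_layers(factor, n_layers):
    """Backprop multiplies one Jacobian factor per layer; here every layer
    contributes the same scalar, so the result is factor ** n_layers."""
    grad = 1.0
    for _ in range(n_layers):
        grad *= factor
    return grad

print(gradient_through_layers(0.5, 50))  # ~8.9e-16: the gradient vanishes
print(gradient_through_layers(1.5, 50))  # ~6.4e+08: the gradient explodes
```

With 50 layers, a per-layer factor of 0.5 leaves essentially no learning signal for the early layers, while 1.5 produces updates large enough to destroy the weights.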
The ultimate goal of this article series is to understand LSTM at a more abstract/mathematical level, because it is one of the practical RNNs; but the idea of LSTM (Long Short-Term Memory) itself was already proposed in 1997, as an RNN algorithm to tackle the vanishing gradient problem.
(The exploding gradient problem is solved with a technique named gradient clipping, which is easier than the techniques for preventing vanishing gradients.
I am also going to explain it in the next article.) After that, some other techniques, like the forget gate and peephole connections, were introduced, but basically it took some 20 years until LSTM got the attention it enjoys today.
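Gradient clipping really is the simpler fix: if the gradient's norm exceeds a threshold, rescale it to that threshold before the weight update, keeping its direction. A minimal sketch; the threshold of 1.0 and the example gradients are arbitrary:

```python
import numpy as np

def clip_by_norm(grad, max_norm=1.0):
    """Rescale the gradient if its L2 norm exceeds max_norm; otherwise leave it alone."""
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        return grad * (max_norm / norm)
    return grad

g = np.array([30.0, 40.0])                 # norm 50, an "exploded" gradient
print(np.linalg.norm(clip_by_norm(g)))     # ~1.0: direction kept, magnitude capped
print(clip_by_norm(np.array([0.3, 0.4])))  # [0.3 0.4]: small gradients pass through
```

Note that clipping bounds how large an update can be, but it cannot restore a gradient that has already vanished, which is why the vanishing side needed architectural fixes like LSTM.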
The reasons for that were the lack of hardware and of datasets, which were also major reasons for the second AI winter.
Source: Sepp Hochreiter, Jürgen Schmidhuber, "Long Short-Term Memory" (1997)  In the 1990s, in the middle of the second AI winter, the Internet started to spread for commercial use.
I think one of the iconic events of this time was the release of the source code of the WWW (World Wide Web) in 1993.
Some of you might still remember how, little by little, you became able to transmit more data online during this time.
That means people gained access to more and more datasets in those days, which is indispensable for machine learning tasks.
After all, we did not get HAL 9000 by the end of 2001; instead we got the Xbox console.
3, Video game industry and GPU.
Even though research on neural networks stagnated in the 1990s, the same period witnessed an advance in the computation of massive parallel linear transformations, due to the need for them in fields such as image processing.
Computer graphics move or rotate in 3d spaces, and that, too, is a matter of linear transformations.
When you think about a car moving through a city, it is convenient to place the car, the buildings, and the other objects in a fixed 3d space.
But when you need to render scenes of the city from a viewpoint inside the car, you put a moving origin in the car and view the city from there.
The spatial information of the city is then calculated as vectors from that moving origin.
Of course, this is also a linear transformation.
And of course I am not talking about a dot or simple figures moving through 3d space.
Computer graphics are composed of numerous flat panels, each of which has at least three vertices, and these move through 3d space.
Depending on the viewpoint, you need to project the 3d graphics onto a 2d space to display them on a device.
You need to calculate which part of each panel is projected to which pixel on the display, and that is called rasterization.
Moreover, in order to get a photorealistic image, you need to consider how light from the light sources is reflected by the panels and projected onto the display.
And you also have to put textures on groups of panels.
You might also need to change color spaces, which is again a linear transformation.
My point is, in short, that in image processing you really need to perform numerous linear transformations in parallel.
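The rotations and projections described above can be sketched in a few lines of numpy; each is a matrix operation applied to every vertex at once, which is exactly the kind of workload GPUs parallelize. A toy sketch, with the cube, rotation angle, and camera distance all made up:

```python
import numpy as np

# Eight vertices of a cube centered at the origin
cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
                dtype=float)

def rotate_y(points, angle):
    """Rotation about the y-axis is one 3x3 matrix applied to every vertex."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return points @ R.T

def project(points, camera_z=4.0):
    """Crude perspective projection: divide x and y by the distance to the camera."""
    depth = camera_z - points[:, 2]
    return points[:, :2] / depth[:, None]

rotated = rotate_y(cube, np.pi / 6)
screen = project(rotated)
print(screen.shape)  # (8, 2): each 3d vertex lands on a 2d screen position
```

A real renderer then rasterizes the resulting 2d triangles into pixels, but the per-vertex part is just this kind of matrix arithmetic repeated millions of times per frame.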
When it comes to the use of CGI in movies, two pioneering movies were released during this time: Jurassic Park in 1993 and Toy Story in 1995.
It is well known that Pixar used to be one of the departments of ILM (Industrial Light and Magic), founded by George Lucas, and that Steve Jobs bought the department.
Even though the members of Pixar had never made a feature-length film in their lives, after much trial and error they made the first CGI-animated feature movie.
On the other hand, in order to acquire funds for the production of Schindler's List (1993), Steven Spielberg took on Jurassic Park (1993), consequently changing the history of CGI through this "side job." Source: http://renderstory.com/jurassic-park-23-years-later/  *I think you have noticed that George Lucas is mentioned almost everywhere in this article.
His influence on technology is not limited to image processing, but extends to sound measurement systems and nonlinear editing systems.
Photoshop was also originally developed at his company.
I would need another article series for this topic, though maybe not on Data Science Blog.
Source: https://editorial.rottentomatoes.com/article/5-technical-breakthroughs-in-star-wars-that-changed-movies-forever/  Considering that the first wire-frame computer graphics made and displayed by computers appeared in the scene showing the wire-frame structure of the Death Star in a war room in Star Wars: A New Hope, the development of CGI was already astonishing at this time.
But I think deep learning owes its development even more to the video game industry.
*I said that the Death Star scene was the first use of graphics made and DISPLAYED by computers, because one of the first movie graphics MADE by a computer dates back to the legendary title sequence of Vertigo (1958).
When it comes to 3d video games, the processing unit constantly has to deal with real-time commands from the controllers.
It is well known that the GPU was originally designed specifically for plotting computer graphics.
The video game market is the biggest in the entertainment industry in general, and it is said that the quality of computer graphics has the strongest correlation with video game sales; therefore, enhancing this quality is a priority for video game console manufacturers.
One good way to see how much video games have developed is to compare the original Final Fantasy 7 with its remake.
The original was released in 1997, the same year LSTM was invented.
And the remake of Final Fantasy 7 was finally released this year.
The original was also made with a very big budget and was spread across three CD-ROMs.
It was also very revolutionary, given that the earlier titles in the Final Fantasy franchise were all retro-style 2d video games.
But in the original the computer graphics still look like bare polygons, and in almost all scenes the camera angle is fixed.
In the remake, on the other hand, everything is very photorealistic, and you can move the camera angle as you like while you play.
There were also fierce battles between graphics processor manufacturers in the computer video game market in the 1990s, but personally I think the release of the Xbox console was a turning point in the development of the GPU.
To be concrete, Microsoft adopted a type of NV20 GPU for the Xbox console, and that left some room for programmability for developers.
The chief architect of the NV20, which was released under the brand name GeForce3, said that making major changes to the company's graphics chips was very risky.
But that decision opened up possibilities for using GPUs beyond computer graphics.
Source: https://de.wikipedia.org/wiki/Nvidia-GeForce-3-Serie  I think the idea of a programmable GPU provided other scientific fields with more visible benefits after CUDA was launched.
And the GPU gained its position not only in deep learning, but also in many other fields, including supercomputing.
*When it comes to deep learning, even GPUs have strong rivals.
The TPU (Tensor Processing Unit), made by Google, is specialized for deep learning tasks and has astonishing processing speed.
And the FPGA (Field Programmable Gate Array), which was originally invented as a customizable electronic circuit, proved to be efficient for reducing the electricity consumption of deep learning tasks.
*I am not so sure about this GPU part.
Processing units, including GPUs, are another big topic that is beyond my capacity, to be honest. I would appreciate it if you could share your view, along with some references to back up your opinion, in the comment section or via email.
*If you are interested, you should see this video of game fans' reactions to the announcement of Final Fantasy 7.
This is the industry that grew behind the development of deep learning, and many fields that need parallel computation owe their progress to the nerds who spent a lot of money on video games, including me.
*But ironically, the engineers who invented the GPU said they did not play video games, simply because they were busy.
If you try to study the technologies behind video games, you will not have much time to play them.
That is the reality.
We have seen that during this second AI winter, the Internet and the GPU laid the foundation for the next AI boom.
But the last piece of the puzzle is still missing: in the next section, let's look at the breakthrough that solved the vanishing/exploding gradient problem of deep learning.
4, Pretraining of deep belief networks: “The Dawn of Deep Learning”.
Some researchers say that the invention of pretraining for deep belief networks by Geoffrey Hinton was the breakthrough that put an end to the last AI winter.
Deep belief networks are a different type of network from the neural networks we have discussed, but their architectures are similar to those of neural networks.
And it was likewise unknown how to train deep belief nets when they have several layers.
Hinton discovered that training the networks layer by layer in advance can tackle the vanishing gradient problem.
And later it was discovered that you can pretrain neural networks layer by layer with autoencoders.
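The greedy layer-wise idea can be illustrated with the simplest possible autoencoder, a linear one, whose optimal encoder is given by the top principal components of its input. A rough sketch of the pretraining loop; the random data and layer sizes are made up, and real pretraining used nonlinear autoencoders or restricted Boltzmann machines trained iteratively rather than this closed-form stand-in:

```python
import numpy as np

def linear_autoencoder(X, hidden_dim):
    """For a linear autoencoder, the reconstruction-optimal encoder is
    the top principal components of X (via SVD of the centered data)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:hidden_dim].T  # encoder weights, shape (input_dim, hidden_dim)

def greedy_pretrain(X, layer_dims):
    """Train one autoencoder per layer; each layer's codes become the next layer's input."""
    weights, codes = [], X
    for dim in layer_dims:
        W = linear_autoencoder(codes, dim)
        weights.append(W)
        codes = codes @ W  # feed the learned representation forward
    return weights

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
weights = greedy_pretrain(X, [16, 8, 4])
print([W.shape for W in weights])  # [(32, 16), (16, 8), (8, 4)]
```

Each layer is trained on a well-conditioned shallow problem, so no gradient ever has to flow through the whole stack; the stacked weights then serve as the starting point for fine-tuning the full network.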
*Deep belief networks are beyond the scope of this article series.
I would have to talk about generative models, Boltzmann machines, and some other topics.
The pretraining of neural networks is no longer mainstream.
But I think it is very meaningful to know that major deep learning techniques, such as the ReLU activation function, optimization with Adam, dropout, and batch normalization, emerged as more effective algorithms for deep learning after the advent of the pretraining techniques, and now we are in the third AI boom.
In the article after next, we are finally going to work on LSTM.
Specifically, I am going to offer a clearer guide to a well-made paper on LSTM, named "LSTM: A Search Space Odyssey." *I make study materials on machine learning, sponsored by DATANOMIQ.
I do my best to make my content as straightforward but as precise as possible.
I include all of my reference sources.
If you notice any mistakes in my materials, including grammatical errors, please let me know (email: [email protected]).
And if you have any advice for making my materials more understandable to learners, I would appreciate hearing it.
[References]
[1] Taniguchi Tadahiro, "An Illustrated Guide to Artificial Intelligence" (2010), Kodansha, pp. 3-11 (谷口忠大 著, 「イラストで学ぶ人工知能概論」, (2010), 講談社, pp. 3-11)
[2] Francois Chollet, Deep Learning with Python (2018), Manning, pp. 14-24
[3] Oketani Takayuki, "Machine Learning Professional Series: Deep Learning" (2015), pp. 1-5, 151-156 (岡谷貴之 著, 「機械学習プロフェッショナルシリーズ 深層学習」, (2015), pp. 1-5, 151-156)
[4] Abigail See, Matthew Lamm, "Natural Language Processing with Deep Learning, CS224N/Ling284, Lecture 8: Machine Translation, Sequence-to-sequence and Attention" (2020), URL: http://web.stanford.edu/class/cs224n/slides/cs224n-2020-lecture08-nmt.pdf
[5] C. M. Bishop, "Pattern Recognition and Machine Learning" (2006), Springer, pp. 192-196
[6] Daniel C. Dennett, "Cognitive Wheels: the Frame Problem of AI" (1984), pp. 1-2
[7] Machiyama Tomohiro, "Understanding Cinemas of 1967-1979" (2014), Yosensya, pp. 14-30 (町山智浩 著, 「<映画の見方>が分かる本」, (2014), 洋泉社, pp. 14-30)
[8] Harada Tatsuya, "Machine Learning Professional Series: Image Recognition" (2017), pp. 156-157 (原田達也 著, 「機械学習プロフェッショナルシリーズ 画像認識」, (2017), pp. 156-157)
[9] Suyama Atsushi, "Machine Learning Professional Series: Bayesian Deep Learning" (2019) (岡谷貴之 須山敦志 著, 「機械学習プロフェッショナルシリーズ ベイズ深層学習」, (2019))
[10] "Understandable LSTM ~ With the Current Trends," Qiita, (2015) (「わかるLSTM ~ 最近の動向と共に」, Qiita, (2015)) URL: https://qiita.com/t_Signull/items/21b82be280b46f467d1b
[11] Hisa Ando, "WEB+DB PRESS plus series: Technologies Supporting Processors – The World Endlessly Pursuing Speed" (2017), Gijutsu-hyoron-sya, pp. 313-317 (Hisa Ando, 「WEB+DB PRESS plusシリーズ プロセッサを支える技術― 果てしなくスピードを追求する世界」, (2017), 技術評論社, pp. 313-317)
[12] "Takahashi Yoshiki and Utamaru discuss George Lucas," miyearnZZ Labo, (2016) ("高橋ヨシキと宇多丸 ジョージ・ルーカスを語る," miyearnZZ Labo, (2016)) URL: https://miyearnzzlabo.com/archives/38865
[13] Katherine Bourzac, "Chip Hall of Fame: Nvidia NV20. The first configurable graphics processor opened the door to a machine-learning revolution," IEEE Spectrum, (2018) URL: https://spectrum.ieee.org/tech-history/silicon-revolution/chip-hall-of-fame-nvidia-nv20
Six Properties of a Modern Business Intelligence
July 10, 2020, 1 Comment, in Artificial Intelligence, Data Science News, Data Warehousing, Database, General, Predictive Analytics, by Benjamin Aunkofer     Completely independent of the industry you operate in, you need information systems that evaluate your business data in order to provide you with a basis for decision-making.
These systems are commonly referred to as Business Intelligence (BI).
In fact, most BI systems suffer from shortcomings that can be remedied.
Moreover, modern BI can partially automate decisions and enable comprehensive analyses with great flexibility in use.
Read this article in English: "Six properties of modern Business Intelligence"  Let us discuss the six properties that distinguish modern business intelligence, which involve attention to technical details, but always within the context of a larger vision for a company's own BI: 1. A unified, high-quality data basis (Single Source of Truth).
Surely every managing director is familiar with the situation in which his managers cannot agree on how many costs and revenues actually arise in detail, or on what exactly the margins per category look like.
And if they can, this information is often available only months too late.
In every company, hundreds or even thousands of decisions have to be made every day at the operational level; with a good information basis, these can collectively be made on a much more solid footing, thereby increasing revenues and saving costs.
On the other side, however, stand many source systems from the company's internal IT landscape as well as further external data sources.
Gathering and consolidating this information often occupies entire groups of employees and leaves a lot of room for human error.
A system that makes at least the most relevant data for managing the business available at the right time and in good quality, in a trusted data zone, as a single source of truth (SPOT).
This SPOT is the centerpiece of modern business intelligence.
Beyond that, further data may also be made available via the BI, which can be useful, for example, for advanced analyses and for data scientists.
The especially trustworthy zone, however, is the one through which all decision-makers across the company can synchronize.
2. Flexible use by different stakeholders.
Even if all employees across the company should be able to access central, trustworthy data, a clever architecture does not preclude each department from receiving its own views of these data, nor even each individual, suitably qualified employee from receiving his or her own view of the data and even creating it themselves.
Many BI systems fail to gain company-wide acceptance because certain departments or functionally defined groups of employees are largely excluded from the BI.
Modern BI systems enable views, and the data integration required for them, for all stakeholders in the company who depend on information, and all benefit equally from the SPOT approach.
3. Efficient ways to expand (Time to Market).
Among the core users of a BI system, dissatisfaction arises above all when expanding, or partially redesigning, the information system requires a long haul.
Historically grown, poorly designed, and not particularly adaptable BI systems often keep an entire team of IT staff busy, along with tickets full of change requests.
Good BI sees itself as a service to its stakeholders, with a short time to market.
The right design, the right choice of software, and the right implementation of data flows and data models lead to considerably shorter development and implementation times for improvements and new features.
Furthermore, not only the technology but also the choice of organizational form is decisive, including the design of roles and responsibilities: from the technical connection of systems, through data provisioning and preparation, to analysis and end-user support.
4.      Integrated capabilities for data science and AI.
Business intelligence and data science are often regarded and managed as separate from each other.
On the one hand, because data scientists are often reluctant to work with what they consider boring data models and pre-prepared data.
And on the other hand, because BI is usually already established as a traditional system in the company, despite the many teething problems that BI still has today.
Data science, often also called advanced analytics, is concerned with diving deep into data via exploratory statistics and methods of data mining (unsupervised machine learning), as well as with predictive analytics (supervised machine learning).
Deep learning is a subfield of machine learning and is likewise used for data mining or predictive analytics.
Machine learning, in turn, is a subfield of artificial intelligence (AI).
In the future, BI and data science or AI will continue to grow together, because at the latest after going live, the prediction results, and also their models, flow back into the business intelligence.
Presumably, BI will evolve into ABI (Artificial Business Intelligence).
However, many companies already use data mining and predictive analytics today, relying on uniform or different platforms, with or without integration into their BI.
Modern BI systems also offer data scientists a platform for accessing high-quality as well as more granular raw data.
5.      Sufficiently high performance.
Most readers of these six points will probably have experienced slow BI at some point.
In many classic BI systems, loading a report that is needed daily takes several minutes.
If loading a dashboard can be combined with a short coffee break, that may still be acceptable now and then for certain reports.
With frequent use at the latest, however, long loading times and unreliable reports are no longer acceptable.
One reason for poor performance is the hardware, which, when using cloud systems, can already be scaled almost linearly to higher data volumes and more analytical complexity.
Using the cloud also enables the modular separation of storage and computing power from the data and applications; it is therefore generally recommendable, yet it is not necessarily the right choice for every company and must fit the corporate philosophy.
In fact, performance does not depend on the hardware alone: the right choice of software and the right design of data models and data flows play an even more decisive role.
For while hardware can be swapped or upgraded relatively easily, changing the architecture involves far more effort and BI expertise.
And unsuitable data models or data flows will certainly bring even the latest hardware in its maximum configuration to its knees.
6.      Cost-efficient use, and a conclusion.
Professional cloud systems that can be used for BI offer total cost calculators, for example Microsoft Azure, Amazon Web Services and Google Cloud.
With these calculators, guided by an experienced BI expert, not only can the costs of using the hardware be estimated, but ideas for cost optimization can also be worked out.
Nevertheless, the cloud is still not the right solution for every company; classic calculations for on-premise solutions remain necessary and are, moreover, easier to plan than cloud costs.
Incidentally, cost efficiency can also be increased by a good selection of suitable software.
Proprietary solutions are tied to different license models and can only be compared with one another via application scenarios.
Apart from that, there are also good open source solutions that can largely be used free of charge and are suitable for many use cases without compromises.
The total cost of ownership (TCO) is part of BI management and should always be kept in focus.
It would be wrong, however, to assess the costs of a BI system only by the costs of hardware and software.
A substantial part of cost efficiency is complementary to the performance aspects of the BI system, because suboptimal architectures work wastefully and require more, and more expensive, hardware than well-tuned architectures.
Establishing central data provisioning in adequate quality can spare many unnecessary data preparation processes, and flexible analysis options can make redundant systems directly superfluous, thus leading to savings.
In any case, for companies with many operational processes, having BI is fundamentally always cheaper than having no BI.
Nowadays, nothing could be more expensive for a company than being steered by gut feeling alone, because the market is not, and it offers a great deal of transparency.
Nevertheless, existing BI architectures should be questioned from time to time.
On closer inspection with BI expertise, greater cost efficiency and data transparency are often achievable.
Six Properties of a Modern Business Intelligence (published 2020-07-10)

Data Analytics and Mining for Dummies.
July 2, 2020 / 1 Comment / in Deep Learning, Predictive Analytics, Tool Introduction / by Sharma Srishti

Data Analytics and Mining is often perceived as an extremely tricky task, cut out for data analysts and data scientists with a thorough knowledge encompassing several different domains such as mathematics, statistics, computer algorithms and programming.
However, there are several tools available today that make it possible for novice programmers, or people with absolutely no algorithmic or programming expertise, to carry out Data Analytics and Mining.
One such tool, which is very powerful and provides a graphical user interface and an assembly of nodes for ETL (Extraction, Transformation, Loading) as well as for modeling, data analysis and visualization with little or no programming, is the KNIME Analytics Platform.
KNIME, or the Konstanz Information Miner, was developed by the University of Konstanz and is now popular with a large international community of developers.
KNIME was originally made for commercial use, but it is now available as open source software. It has been used extensively in pharmaceutical research since 2006 and is also a powerful data mining tool for the financial data sector.
It is also frequently used in the Business Intelligence (BI) sector.
KNIME as a Data Mining Tool.
KNIME is also one of the most well-organized tools, enabling various methods of machine learning and data mining to be integrated.
It is very effective when we are pre-processing data i.e. extracting, transforming, and loading data.
KNIME has a number of good features like quick deployment and scaling efficiency.
It employs an assembly of nodes to pre-process data for analytics and visualization.
It is also used for discovering patterns among large volumes of data and transforming data into more polished/actionable information.
Some Features of KNIME:  Free and open source.
Graphical and logically designed.
Very rich in analytics capabilities.
No limitations on data size, memory usage, or functionalities.
Compatible with Windows, macOS and Linux.
Written in Java and edited with Eclipse.
A node is the smallest design unit in KNIME and each node serves a dedicated task.
KNIME contains graphical, drag-drop nodes that require no coding.
Nodes are connected, with one node's output being another node's input, forming a workflow.
Therefore end-to-end pipelines can be built with no coding effort.
This makes KNIME stand out: it is user-friendly and accessible even to beginners without a computer science background.
KNIME workflow designed for graduate admission prediction.
KNIME has nodes to carry out Univariate Statistics, Multivariate Statistics, Data Mining, Time Series Analysis, Image Processing, Web Analytics, Text Mining, Network Analysis and Social Media Analysis.
The KNIME node repository has a node for every functionality you can possibly think of and need while building a data mining model.
One can execute different algorithms such as clustering and classification on a dataset and visualize the results inside the framework itself.
It is a framework capable of giving insights on data and the phenomenon that the data represent.
Some commonly used KNIME node groups include:
Input-Output or I/O: Nodes in this group read data from, or write data to, external files or databases.
Data Manipulation: Used for data pre-processing tasks.
Contains nodes to filter, group, pivot, bin, normalize, aggregate, join, sample, partition, etc.
Views: This set of nodes permits users to inspect data and analysis results using multiple views.
This gives a means for truly interactive exploration of a data set.
Data Mining: In this group, there are nodes that implement certain algorithms (like K-means clustering, Decision Trees, etc.).
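KNIME wires these node groups together visually, but the same I/O → Manipulation → Mining → Views flow can be sketched in a few lines of plain Python. The snippet below is only an analogy and uses no KNIME API: each function stands in for one node, with inline toy data replacing a File Reader node and a hand-rolled K-means standing in for KNIME's clustering node.

```python
import numpy as np

# Toy stand-ins for KNIME's node groups: each function is one "node",
# and the workflow wires one node's output into the next node's input.

def read_data():
    # I/O node stand-in: normally a File Reader node; here inline toy data.
    rng = np.random.default_rng(0)
    a = rng.normal(loc=0.0, scale=0.3, size=(50, 2))
    b = rng.normal(loc=3.0, scale=0.3, size=(50, 2))
    return np.vstack([a, b])

def normalize(x):
    # Data Manipulation node stand-in: z-score normalization per column.
    return (x - x.mean(axis=0)) / x.std(axis=0)

def kmeans(x, k=2, iters=10, seed=0):
    # Data Mining node stand-in: a minimal K-means clustering.
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # assign each row to its nearest center
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned rows
        centers = np.array([x[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

data = read_data()
labels, centers = kmeans(normalize(data), k=2)
# "View" node stand-in: inspect the result.
print("cluster sizes:", np.bincount(labels))
```

In KNIME, each of these functions would be a draggable node and the function calls would be the connecting edges of the workflow.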
Comparison with other tools.
The first version of the KNIME Analytics Platform was released in 2006, whereas Weka and R were released in 1997 and 1993 respectively.
KNIME is a dedicated data mining tool, whereas Weka and R are machine learning tools that can also do data mining.
KNIME integrates with Weka to add machine learning algorithms to the system.
The R project adds statistical functionalities as well.
Furthermore, KNIME’s range of functions is impressive, with more than 1,000 modules and ready-made application packages.
The modules can be further expanded by additional commercial features.
Data Science for Smart Home at the family-run company Miele.
June 30, 2020 / 0 Comments / in Artificial Intelligence, Machine Learning, Use Cases / by Benjamin Aunkofer

Dr. Florian Nielsen is Principal for AI and Data Science at Miele in the Smart Home division, responsible for the development of data-driven digital products and product extensions. He studied computer science and received his doctorate from Ulm University on the topic of multimodal cognitive technical systems.
Data Science Blog: Dr. Nielsen, many companies and users already talk about Smart Home today, yet what they actually have is more of a Remote Home. How do you turn that into a real Smart Home?
That actually matches my own perception.
Merely controlling connected products via digital devices does not turn a connected product into a "smart" one.
However, this remote functionality is a necessary piece of the puzzle on the path from a non-connected product, via an intelligent connected product, to an ecosystem of complementary smart products and services.
Connected products, even if they can only be controlled remotely, generate data and enable us to drive the personalization, optimization or even automation of product functions based on this data.
For me, a product becomes "smart" when, for example, it adapts better to the needs of its user or offers assistance functions that make everyday life easier.
Data Science Blog: Smart Home, in turn, is a big term that concerns far more than appliances for kitchens and bathrooms. How far will you be able to advance into the Smart Home here?
Smart Home is almost a burnt-out term for me.
Users associate it above all with controlling heating and roller shutters.
In principle, it is about a vision in which smart, connected products embed themselves into a context-based ecosystem in order to offer the respective user added value through intelligent products and services in everyday life, not only at home.
For us, for example, this does not start only when a cooking process with Miele appliances begins; it potentially covers the complete user journey around nutrition (e.g. inspiration, shopping, stock-keeping) and cooking.
Of course, we increasingly consider how products and services could complement our existing product portfolio or make it more accessible to the user, but we do not limit ourselves to that.
An additional aspect that is essential for us at Miele, however, is the customer's privacy.
When evaluating potential use cases, the privacy of our customers always plays an important role.
Data Science Blog: Most data science departments deal with processes, e.g. quality monitoring or process optimization in production. You, however, use data science as a component of products. What has to be considered there?
Customer needs.
We believe in user-oriented product development, and accordingly everything starts for us with identifying needs and potential solutions for them.
We usually start with design thinking in order to identify the topics that offer real added value for the customer.
When data science then turns out to be part of the derived solution, that is where we really come into play.
A major challenge is that we often cannot start from a green field.
At least not when it comes to an additional product feature that has to cope with existing appliance hardware, the connectivity architecture and the data basis resulting from them.
Our new product generations are capable of remote updates, but even that sometimes helps us only to a limited extent.
Accordingly, anticipating appliance requirements is essential.
Things naturally look somewhat better for implementations of cloud-based use cases.
Data Science Blog: It is often said that data scientists are hard to find. Is recruiting actually still an issue for you?
Data scientists, here not interpreted as the myth of the "unicorn" or "full-stack" data scientist, are of course important and not easy to find in a region like Gütersloh.
But engineers, whether data, ML, cloud or software in general, are the far more essential building block for us.
Implementing ideas simply requires a lot of engineering.
It is by now well known that data science makes up a very important, but also smaller, part of a data-driven product.
Apart from that, I have the impression that more and more data science degree programs are being set up, which on the one hand make our search for staff easier and on the other hand allow us to hire professionals who do not (have to) hold a PhD, as they did in the past.
Data Science Blog: You have already successfully integrated several analyses into your products. Which challenges had to be overcome, and which still lie ahead of you?
Like many data science departments, we are still a relatively young unit.
With most of our smart products and services we are currently in MVP development, so there are several challenges that we are experiencing first-hand right now.
As mentioned above, this starts with taking existing appliance constraints into account, continues with sometimes heterogeneous, inconsistent data bases, and extends to establishing data science infrastructure and deployment processes.
In my view, many companies also face the challenge of ensuring the further development and operation of AI and data science products.
Compared with the "fire-and-forget" mindset that followed the start of series production in earlier times, a rethink has to take place.
Data-driven products and services "live" and must accordingly be treated and cared for differently, with more effort, but also with the chance of becoming "ever better".
That is why we will increasingly find buzzwords such as "MLOps" in the usual consulting literature when it comes to sustainably generating added value from AI and data science for companies.
And rightly so.
Data Science Blog: Data-driven thinking is demanded today both from employees in the business departments and from management. For a traditional company like Miele this is surely a challenge. How can you foster this way of thinking in the company?
Data-driven thinking can only be established if access to data, and to analyses built on top of it, is available in the first place.
That is why data democratization is the most important first step.
From my perspective, it is about first demonstrating the potential, and then using data to reduce uncertainty.
Our experience is that many business departments have a genuine interest in data-driven analysis of their hypotheses and are grateful for data-driven support.
Miele was and is a very innovative company that wants to become "ever better".
That is why we are currently receiving great support from the very top and are very optimistic.
We think that a step in the right direction has already been taken, and that with a growing number of multipliers, data-driven thinking can establish itself throughout the entire company.
Simple RNN: the first foothold for understanding LSTM.
June 17, 2020 / 0 Comments / in Artificial Intelligence, Deep Learning, Machine Learning, Mathematics / by Yasuto Tamura

*In this article "Densely Connected Layers" is written as "DCL," and "Convolutional Neural Network" as "CNN."

In the last article, I mentioned: "When it comes to the structure of RNN, many study materials try to avoid showing that RNNs are also connections of neurons, as well as DCL or CNN." Even if you manage to understand DCL and CNN, you can suddenly be left behind once you try to understand RNN, because it looks like a different field.
In the second section of this article, I am going to provide some help toward the more abstract understanding of DCL/CNN which you need when you read most other study materials.
My explanation of this simple RNN is based on a chapter of a textbook published by the Massachusetts Institute of Technology, which is also recommended in some deep learning courses at Stanford University.
First of all, you should keep in mind that simple RNNs are not useful in many cases, mainly because of the vanishing/exploding gradient problem, which I am going to explain in the next article. LSTM is one major type of RNN used for tackling those problems. But without a clear understanding of forward/back propagation of RNN, I think many people would get stuck when they try to understand how LSTM works, especially during its back propagation stage.
If you have tried climbing the mountain of understanding LSTM, but found yourself having to retreat back to the foot, I suggest that you read through this article on simple RNNs. It should help you to gain a solid foothold, and you will be ready to try climbing the mountain again.
*This article is the second article of "A gentle introduction to the tiresome part of understanding RNN."

1, A brief review on back propagation of DCL.

Simple RNNs are straightforward applications of DCL, but if you do not have any idea of DCL forward/back propagation, you will not be able to understand this article.
If you more or less understand how back propagation of DCL works, you can skip this first section.
Deep learning is a part of machine learning.
And most importantly, whether it is classical machine learning or deep learning, adjusting parameters is what machine learning is all about.
Parameters mean the elements of functions other than the variables.
For example, in a very simple linear function f(x) = ax + b, x is the variable, and a and b are parameters.
In the case of classical machine learning algorithms, the number of those parameters is very limited, because they were originally designed manually.
Such functions for classical machine learning are useful for features found by humans, after trial and error (feature engineering is the field of finding such effective features, manually).
You adjust those parameters based on how different the outputs (the estimated outcomes of classification/regression) are from the supervising vectors (the data prepared to show the ideal answers).
In the last article I said neural networks are just mappings, whose inputs are vectors, matrices, or sequence data.
In case of DCLs, inputs are vectors.
Then what is the number of parameters?
The answer depends on the number of neurons and layers.
In the example of the DCL on the right side, the number of connections between the neurons is the number of parameters (would you like to try counting them? At least I would say "No.").
Unlike classical machine learning you no longer need to do feature engineering, but instead you need to design networks effective for each task and adjust a lot of parameters.
*I think the hype of AI comes from the fact that neural networks find features automatically.
But the reality is difficulty of feature engineering was just replaced by difficulty of designing proper neural networks.
It is easy to imagine that you need an efficient way to adjust those parameters, and the method is called back propagation (or just backprop).
As long as it is about DCL backprop, you can find a lot of well-made study materials on that, so I am not going to cover that topic precisely in this article series.
Simply put, during back propagation, in order to adjust the parameters of a layer you need the errors in the next layer.
And in order to calculate the errors of the next layer, you need the errors in the layer after that.
*You should not think too much about what the “errors” exactly mean.
Such “errors” are defined in this context, and you will see why you need them if you actually write down all the mathematical equations behind backprops of DCL.
The red arrows in the figure show how the errors of all the neurons in a layer propagate backward to a neuron in the preceding layer.
The figure shows only some sets of such errors propagating backward, but in practice you have to think about all the combinations of such red arrows in the whole back propagation(this link would give you some ideas on how DCLs work).
These points are the minimum prerequisites for continuing to read this article on RNNs.
But if you are planning to understand RNN forward/back propagation at an abstract/mathematical level at which you can read academic papers, I highly recommend that you actually write down all the equations of DCL backprop.
And if possible you should try to implement backprop of a three-layer DCL.
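Following that suggestion, here is a minimal NumPy sketch of forward and back propagation for a three-layer DCL. It is my own illustrative implementation, not the author's material: sigmoid activations and squared error are assumptions, chosen only to show how the errors ("deltas") of one layer are computed from the errors of the next layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)

# A three-layer DCL: input (3 neurons) -> hidden (4) -> output (2).
W1 = rng.normal(size=(4, 3)) * 0.5   # parameters of layer 1
W2 = rng.normal(size=(2, 4)) * 0.5   # parameters of layer 2
x = np.array([0.1, 0.5, -0.3])       # input vector
t = np.array([1.0, 0.0])             # supervising vector (ideal answer)

lr = 1.0
for step in range(2000):
    # --- forward propagation ---
    h = sigmoid(W1 @ x)              # hidden activations
    y = sigmoid(W2 @ h)              # estimated output
    # --- back propagation ---
    # "errors" of the output layer, for squared error E = 0.5*||y - t||^2
    delta2 = (y - t) * y * (1 - y)
    # errors of the hidden layer need the errors of the next layer (delta2)
    delta1 = (W2.T @ delta2) * h * (1 - h)
    # gradients are outer products of a layer's errors and its inputs
    W2 -= lr * np.outer(delta2, h)
    W1 -= lr * np.outer(delta1, x)

print(np.round(y, 3))  # should be close to the supervising vector [1, 0]
```

Writing the two `delta` lines by hand, as the author recommends, is exactly the exercise that makes RNN backprop much easier to follow later.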
2, Forward propagation of simple RNN.
*For a better understanding of the second and third sections, I recommend you to download an animated PowerPoint slide which I prepared. It should help you understand simple RNNs.
From now on imagine that inputs of RNN come from the bottom and outputs go up.
But RNNs have to keep information from earlier time steps during several upcoming time steps, because, as I mentioned in the last article, RNNs are used for sequence data, the order of whose elements is important.
In order to do that, the information of the neurons in the middle layer of the RNN propagates forward to the middle layer itself.
Therefore in one time step of forward propagation of an RNN, the input at that time step propagates forward as in a normal DCL, and the RNN gives out an output at that time step.
And the information of one neuron in the middle layer propagates forward to the other neurons, like the yellow arrows in the figure.
And the information in the next neuron propagates forward to the other neurons, and this process is repeated.
These are called the recurrent connections of the RNN.
*To be exact, we are just looking at one type of recurrent connection. For example, Elman RNNs have simpler recurrent connections, and the recurrent connections of LSTM are more complicated.
Whether it is a simple one or not, an RNN basically repeats this process of getting an input at every time step, giving out an output, and making recurrent connections to itself.
But you need to keep the values of the activated neurons at every time step, so virtually you need to consider the same RNN duplicated over several time steps, like the figure below.
This is the idea of unfolding RNN.
Depending on the context, the whole unfolded DCL with recurrent connections is also called an RNN. In many situations, RNNs are simplified as below.
If you have read through this article up to this point, I bet you have gained some better understanding of RNNs, so you should little by little get used to this more abstract, blackboxed way of showing an RNN.
You have seen that you can unfold an RNN per time step.
From now on I am going to show the simple RNN in a simpler way, based on the MIT textbook which I recommend.
The figure below shows how an RNN propagates forward during two time steps.
The input at each time step propagates forward as in a normal DCL and gives out an estimated output. (The circumflex over a variable is called a "hat," and it means that the value is an estimated value.
Whatever machine learning task you work on, the outputs of the functions are just estimations of the ideal outcomes.
You need to adjust the parameters for better estimations.
You should always be careful whether a value is an actual value or an estimated value in the context of machine learning or statistics.)
But the most important parts are the middle layers.
*To be exact, I should have drawn the middle layers as connections of two layers of neurons, like the figure on the right side.
But I made my figure closer to the chart in the MIT textbook, and most other study materials also show the combination of the two neurons before/after activation as one neuron.
The pre-activation values of the middle layer are just linear summations of the inputs (if you do not know what "linear summations" mean, please scroll down this page a bit) and, through the recurrent connections, of the activated values of the middle layer from the last time step.
The activated values of the middle layer propagate forward in two ways.
One is normal DCL forward propagation to the output layer, and the other is the recurrent connection to the middle layer at the next time step.
These are equations for each step of forward propagation.
*Please forgive me for adding some mathematical equations to this article, even though I pledged not to in the first article.
You can skip them, but for some people it is, on the contrary, more confusing if there are no equations.
In case you are allergic to mathematics, I have prescribed some treatments below.
*A linear summation is a type of weighted summation of some elements.
Concretely, when you have a vector of elements and a vector of weights, then the sum of each element multiplied by its weight is a linear summation of the elements.
*When you see a product of a matrix and a vector, you should clearly make an image of the connections between two layers of a neural network.
You can also say that each element of the resulting vector is a linear summation of all the elements of the input vector, and that the corresponding row of the matrix gives the weights for that summation.
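The two footnotes above can be checked numerically. This tiny NumPy snippet (purely illustrative) confirms that each element of a matrix-vector product is a linear summation of the vector's elements, with one row of the matrix as the weights:

```python
import numpy as np

W = np.array([[1.0, 2.0, 3.0],
              [0.5, 0.0, -1.0]])   # weights: one row per output neuron
v = np.array([2.0, 1.0, 4.0])      # values of the previous layer

out = W @ v                        # forward propagation between two layers
# each output element is a weighted (linear) summation of v's elements:
manual = np.array([sum(W[i, j] * v[j] for j in range(3)) for i in range(2)])
print(out)  # prints [16. -3.]
```

Thinking of every `W @ v` in the equations below as "one layer of connections" is the mental image the author is asking for.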
A very important point is that you share the same parameters (the same weight matrices) at every time step. And you are likely to see this RNN in this blackboxed form.
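Since the forward-propagation equations themselves did not survive into this text, here is a hedged reconstruction of the loop in code. The notation follows the MIT textbook the author cites and is an assumption, not a quotation: a(t) = b + W h(t-1) + U x(t), h(t) = tanh(a(t)), o(t) = c + V h(t), ŷ(t) = softmax(o(t)), with W, U, V shared across all time steps.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 5, 2

# Shared parameters: the SAME W, U, V, b, c are used at every time step.
U = rng.normal(size=(n_hid, n_in)) * 0.1   # input -> middle layer
W = rng.normal(size=(n_hid, n_hid)) * 0.1  # recurrent: middle -> middle
V = rng.normal(size=(n_out, n_hid)) * 0.1  # middle -> output layer
b = np.zeros(n_hid)
c = np.zeros(n_out)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

xs = [rng.normal(size=n_in) for _ in range(4)]  # a sequence of 4 inputs
h = np.zeros(n_hid)                             # initial middle-layer state
outputs = []
for x in xs:
    a = b + W @ h + U @ x   # linear summation of input + recurrent part
    h = np.tanh(a)          # activated values of the middle layer
    o = c + V @ h
    y_hat = softmax(o)      # estimated output ("hat") at this time step
    outputs.append(y_hat)

print(len(outputs), outputs[-1].sum())
```

Note that `h` is overwritten each step but fed back in: that single line `W @ h` is the recurrent connection of the whole article.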
3, The steps of back propagation of simple RNN.
In the last article, I said "I have to say backprop of RNN, especially LSTM (a useful and mainstream type of RNN), is a monster of chain rules." I did my best to make my PowerPoint on LSTM backprop straightforward.
But looking at it again, the LSTM backprop part still looks like an electronic circuit, and it requires some patience from you to understand it.
If you want to understand LSTM at a more mathematical level, understanding the flow of simple RNN backprop is indispensable, so I would like you to be patient while understanding this step (and you have to be even more patient while understanding LSTM backprop).
This might be a matter of my literacy, but explanations on RNN backprop are very frustrating for me in the points below.
Most explanations just show how to calculate gradients at each time step.
Most study materials are visually very poor.
Most explanations just emphasize that "errors are back propagating through time," using tons of arrows, but they lack concrete instructions on how you actually renew the parameters with those errors.
If you can relate to the feelings I mentioned above, the instructions from now on could somewhat help you.
And with the animated PowerPoint slide I prepared, you will gain a clearer understanding of this topic at a more mathematical level.
Backprop of RNN, as long as you are thinking about simple RNNs, is not so different from that of DCLs.
But you have to be careful about the meaning of errors in the context of RNN backprop.
Back propagation through time (BPTT) is one of the major methods for RNN backprop, and I am sure most textbooks explain BPTT.
But most study materials just emphasize that you need errors from all the time steps, and I think that is very misleading and confusing.
You need all the gradients to adjust parameters, but you do not necessarily need all the errors to calculate those gradients.
Gradients in the context of machine learning mean the partial derivatives of the error function with respect to certain parameters.
And another confusing point in many textbooks, including the MIT one, is that they give the impression that the parameters depend on the time steps.
For example, some study materials use notations that look like a gradient with respect to the parameters at one particular time step.
In my opinion such a gradient should rather be written as the gradient of the error at that time step with respect to the shared parameters.
But many study materials denote the gradients of those errors in the former way, so from now on let me use the notations which you can see in the figures in this article.
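The symbols themselves appear only in the author's figures, so the following is written in standard textbook notation, which is an assumption on my part rather than the author's own symbols. Writing W for the shared recurrent weight matrix and W^(t) for the "copy" of W used at time step t, the point about shared parameters can be stated as:

```latex
% E^{(t)}: the error at time step t;  W: the shared recurrent weights;
% W^{(t)}: the "copy" of W used at time step t (notation assumed here).
\frac{\partial E}{\partial W}
  \;=\; \sum_{t=1}^{T} \frac{\partial E}{\partial W^{(t)}}
% Only the errors E^{(t)} and these per-step gradients depend on t;
% the parameter W itself is shared across all time steps and is
% renewed once, using the summed gradient on the left-hand side.
```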
In order to calculate the gradient associated with one time step, you need the errors from that time step onward (as you can see in the figure, in order to calculate a gradient in a colored frame, you need all the errors in the same color).
*To be exact, in the figure above I was supposed to prepare many more arrows in different colors to show the whole process of RNN backprop, but that is not realistic.
In the figure I displayed only the flows of errors necessary for calculating each gradient at one time step.
*Another confusing point is that the errors are correctly written with time-step indices, because they are values computed from the neurons after forward propagation.
They depend on the time steps, and these are the very values which I have been calling "errors." That is why the parameters do not depend on the time steps, whereas the errors do.
As I mentioned before, you share the same parameters at every time step.
Again, please do not assume that the parameters are different from time step to time step.
It is the gradients/errors (you need the errors to calculate the gradients) which depend on the time step.
And after calculating the errors at every time step, you can finally adjust the parameters one single time, and that is why this is called "backpropagation through time." (It is easy to imagine that this method can be very inefficient.
If the input is the whole text of a Wikipedia article, you need to input all the sentences of the text to renew the parameters one time.
To solve this problem there is a backprop method named "truncated BPTT," with which you renew the parameters based on only a part of the text.
) And after calculating those gradients at every time step, you can take a summation of them.
With this summed gradient, you can finally renew the value of the parameter one time.
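The whole procedure can be sketched in a few lines of numpy. This is only an illustrative toy example of BPTT with choices of my own (a squared error at every time step, tiny layer sizes, a plain tanh cell), not the exact setup of this article: the same W_x and W_h are used at every forward step, the errors depend on the time step in the backward pass, the per-step gradients are summed, and the parameters are renewed one single time.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_hid = 5, 3, 4                          # sequence length and layer sizes (illustrative)

W_x = rng.normal(scale=0.1, size=(n_hid, n_in))   # shared input weights
W_h = rng.normal(scale=0.1, size=(n_hid, n_hid))  # shared recurrent weights
xs  = rng.normal(size=(T, n_in))                  # input sequence
ys  = rng.normal(size=(T, n_hid))                 # targets (illustrative)

# Forward pass: the SAME W_x, W_h are used at every time step.
hs = [np.zeros(n_hid)]
for t in range(T):
    hs.append(np.tanh(W_x @ xs[t] + W_h @ hs[-1]))

# Backward pass (BPTT): the errors depend on the time step,
# and each per-step gradient is summed into ONE gradient per parameter.
dW_x = np.zeros_like(W_x)
dW_h = np.zeros_like(W_h)
delta = np.zeros(n_hid)                    # error flowing back through time
for t in reversed(range(T)):
    delta = (delta + (hs[t + 1] - ys[t])) * (1.0 - hs[t + 1] ** 2)
    dW_x += np.outer(delta, xs[t])         # summation over time steps
    dW_h += np.outer(delta, hs[t])
    delta = W_h.T @ delta                  # propagate the error one step back

# One single parameter update after processing the whole sequence.
lr = 0.01
W_x -= lr * dW_x
W_h -= lr * dW_h
```

To get truncated BPTT from this sketch, you would simply run the backward loop over only the last few time steps instead of all T.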
At the beginning of this article I mentioned that simple RNNs are no longer used in practice, and that comes from the exploding/vanishing gradient problem of RNNs.
This problem was one of the reasons for the AI winter which lasted for some 20 years.
In the next article I am going to write about LSTM, a fancier type of RNN, in the context of the history of neural networks.
* I make study materials on machine learning, sponsored by DATANOMIQ.
I do my best to make my content as straightforward but as precise as possible.
I include all of my reference sources.
If you notice any mistakes in my materials, including grammatical errors, please let me know (email: [email protected]).
And if you have any advice for making my materials more understandable to learners, I would appreciate hearing it.
Yasuto Tamura, 2020-06-17: Simple RNN: the first foothold for understanding LSTM.

Severity of lockdowns and how they are reflected in mobility data.
June 16, 2020, in Data Science News, by Emilia Cheladze.

The global spread of SARS-CoV-2 at the beginning of March 2020 forced the majority of countries to introduce measures to contain the virus.
The governments found themselves facing a very difficult tradeoff between limiting the spread of the virus and bearing potentially catastrophic economical costs of a lockdown.
Notably, considering the level of globalization today, the responses of countries varied a lot in severity and latency.
In the overwhelming amount of media and social media information, a lot of misinformation and anecdotal evidence surfaced and remained in people's minds.
In this article, I try to take a more systematic view on the severity of the governments' responses and the change in people's mobility due to the pandemic.
I want to look at several countries with different approaches to restraining the spread of the virus.
I will look at the governmental regulations, and at when and how they were introduced.
For that I am referring to an index called Oxford COVID-19 Government Response Tracker (OxCGRT)[1].
The OxCGRT follows, records, and rates the publicly announced actions taken by governments.
However, looking just at the regulations and taking them at face value does not guarantee that we have the whole picture.
Therefore, it is equally interesting to investigate how the recommended levels of self-isolation and social distancing are reflected in the mobility data, and we will look at that first.
The mobility dataset.
The mobility data used in this article was collected by Google and made freely accessible[2].
The data reflects how the number of visits and their length changed as compared to a baseline from before the pandemic.
The baseline is the median value for the corresponding day of the week in the period from 3.01.2020 – 6.02.2020.
The dataset contains data in six categories.
Here we look at only 4 of them: public transport stations, places of residence, workplaces, and retail/recreation (including shopping centers, libraries, gastronomy, culture).
The analysis intentionally omits parks (public beaches, gardens etc.) and grocery/pharmacy category.
Mobility in parks is excluded due to a strong weather confound.
The baseline was created in winter, and increased/decreased (depending on the hemisphere) activity in parks is expected as the weather changes.
It would be difficult to disentangle this change from the change caused by the pandemic without referring to a different baseline.
The grocery shops and pharmacies are excluded because the measures regarding the shopping were very similar across the countries.
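As a sketch of how such a day-of-week baseline and the centered moving average used throughout this article can be computed with pandas (the data here is synthetic and the whole snippet is my own illustration, not code from the Google reports, which are already published as percent change from baseline):

```python
import numpy as np
import pandas as pd

# Synthetic daily visit counts standing in for one mobility category.
dates = pd.date_range("2020-01-03", "2020-04-30", freq="D")
rng = np.random.default_rng(1)
visits = pd.Series(100 + rng.normal(0, 5, len(dates)), index=dates)
visits.loc["2020-03-15":] -= 40          # a stylized "lockdown" drop

# Baseline: median for each day of the week over 3.01.2020 - 6.02.2020.
pre = visits.loc["2020-01-03":"2020-02-06"]
baseline = pre.groupby(pre.index.dayofweek).median()

# Percent change from the day-of-week baseline, as in the Google reports.
base_vals = baseline.loc[visits.index.dayofweek].to_numpy()
pct_change = 100 * (visits / base_vals - 1)

# Moving average over +/- 6 days, i.e. a centered 13-day window.
smoothed = pct_change.rolling(window=13, center=True).mean()
```

Comparing against a day-of-week median rather than a plain mean is what lets the reports show weekday and weekend behavior on the same scale.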
Amid the Covid-19 pandemic, a lot of anecdotal information surfaced that some countries, like Sweden, acted completely against the current by not introducing a lockdown.
It was reported that there were absolutely no restrictions and that Sweden could basically be treated as a control group for comparing the effects of different approaches to lockdown on the spread of the coronavirus.
Looking at the mobility data (below), we can see, however, that there was a change in the mobility of Swedish citizens in comparison to the baseline.
Fig. 1 Moving average (+/- 6 days) of the mobility data in Sweden in four categories.
Looking at the change in mobility in Sweden, we can see that the change in the residential areas is small, but it is indicating some change in behavior.
A change in the retail and recreational sector is more noticeable.
Most interestingly it is approaching the baseline levels at the beginning of June.
The most substantial changes, however, are in the workplaces and transit categories.
They are also much slower to come back to the baseline, although a trend in that direction starts to be visible.
Next, let us have a look at the change in mobility in selected countries, separately for each category.
Here, I compare Germany, Sweden, Italy, and New Zealand.
(To see the mobility data for other countries visit https://covid19.datanomiq.de/#section-mobility).
Fig. 2 Moving average (+/- 6 days) of the mobility data.
Looking at the data, we can see that the changes in mobility in Germany and Sweden were of a somewhat similar order of magnitude, compared to the changes in mobility in countries like Italy and New Zealand.
Without a doubt, the behavior in Sweden changed the least from the baseline in all the categories.
Nevertheless, claiming that people's reactions to the pandemic in Sweden and Germany were polar opposites is not necessarily correct.
Out of all the categories presented, the biggest discrepancy between Sweden and Germany is in the retail and recreation sector.
The changes in Italy and New Zealand reached very comparable levels, but in New Zealand they seem to be much more dynamic, especially in approaching the baseline levels again.
The government response dataset.
The Oxford COVID-19 Government Response Tracker records regulations from a number of countries, rates them, and categorizes them into a few indices.
A number between 1 and 100 reflects the level of the actions taken by a government.
Here, I focus on the Containment and Health sub-index that includes 11 indicators from categories: containment and closure policies and health system policies[3].
The actions included in the index are for example: school and workplace closing, restrictions on public events, travel restrictions, public information campaigns, testing policy and contact tracing.
Below, we look at a plot with the Containment and Health sub-index value for the four aforementioned countries.
The data and documentation are available here[4].
Fig. 3 Oxford COVID-19 Government Response Tracker, the Containment and Health sub-index.
Here the difference between Sweden and the other countries that we are looking at becomes more apparent.
Nevertheless, the Swedish government did take some measures in order to contain the spread of SARS-CoV-2.
At its highest, the index reached 45 points in Sweden, 73 in Germany, 92 in Italy, and 94 in New Zealand.
In all these countries except for Sweden the index has started dropping again, with the drop being most dynamic in New Zealand, where the index has basically reached the level of Sweden.
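Peak values like the ones quoted above can be extracted from the tracker table with a simple group-by. The snippet below uses a toy stand-in table; the column names mirror the OxCGRT CSV layout, but they are an assumption to be checked against the tracker's documentation:

```python
import pandas as pd

# Toy stand-in for the OxCGRT table, one row per country and day.
# Column names are assumed from the OxCGRT CSV layout; verify them
# against the tracker's current documentation before relying on them.
df = pd.DataFrame({
    "CountryName": ["Sweden", "Sweden", "Germany", "Germany",
                    "Italy", "Italy", "New Zealand", "New Zealand"],
    "Date": [20200301, 20200415] * 4,
    "ContainmentHealthIndex": [10, 45, 20, 73, 30, 92, 15, 94],
})
df["Date"] = pd.to_datetime(df["Date"], format="%Y%m%d")

# Peak value of the Containment and Health sub-index per country.
peaks = df.groupby("CountryName")["ContainmentHealthIndex"].max()
print(peaks.sort_values(ascending=False))
```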
Conclusions.
As we have hopefully seen, the responses to the COVID-19 pandemic from governments differed substantially, as did the resulting changes in the mobility behavior of the inhabitants.
However, the discrepancies were probably not as big as reported in the media.
The overwhelming presence of the social media could have blown some of the mentioned differences out of proportion.
For example, the discrepancy in mobility behavior between Sweden and Germany was biggest in the recreation sector, which involves cafes, restaurants, cultural venues, and shopping centers.
It is possible that those activities were the ones that people in lockdown missed the most.
Looking at the Swedes who were still participating in them, it was easy to extrapolate to the overall landscape of the response to the virus in the country.
It is very hard to say which country's approach will bring the best effects for people's well-being and the economy.
The ongoing pandemic will remain a topic of extensive research for many years to come.
We will (most probably) eventually find out which approach to the lockdown was optimal (or at least come close to finding out).
For the time being, it is however important to remember that there are many factors in play and looking into one type of data might be misleading.
Comparing countries with different history, weather, political and economic climate, or population density might be misleading as well.
But it is still more insightful than not looking into the data at all.
[1] Hale, Thomas, Sam Webster, Anna Petherick, Toby Phillips, and Beatriz Kira (2020).
Oxford COVID-19 Government Response Tracker, Blavatnik School of Government.
Data use policy: Creative Commons Attribution CC BY standard.
[2] Google LLC “Google COVID-19 Community Mobility Reports”. https://www.google.com/covid19/mobility/ retrieved: 04.06.2020.
[3] See the documentation: https://github.com/OxCGRT/covid-policy-tracker/tree/master/documentation
[4] https://github.com/OxCGRT/covid-policy-tracker retrieved on 04.06.2020.