Sidharth Viswanathan
Testing the Turing Test in late 2017


Abstract:

In 1950, Alan Turing famously asked, "Can machines think?", where the terms 'machine' and 'think' are defined by the problem itself. This paper aims to test the Turing Test in 2017, nearly seven decades after Turing posed the question. There has been amazing progress in machine learning, largely built on processing hardware, with GPUs as a major driver of this recent progress; the question is whether a computer and a human can both survive the Turing Test today. The test can later be extended to validating day-to-day applications, such as checking the versatility of chatbots. This research traces the hardware and software improvements from the earliest tests to 2017, that is, it examines the hardware of the machine on which the test is run. Moreover, when the Turing Test is shown to fail in certain situations, it can be concluded that there is a flaw in the system design or a misconception in the ideas behind the system. Additionally, architectural defects in human-computer interaction are uncovered.

Overview of Today’s Hardware:

There is a remarkable convergence of trends in applications, machine learning and hardware, which increases the opportunity for major hardware and software changes. With the progress machine learning has made within these seven decades, can the Turing Test still be passed? This paper also aims to find out whether a human will be defeated by a computer.

Overview Of Machine Learning Hardware:

Neural network research has been controversial, going through a typical hype curve. The first neural network algorithm developed was a brain-inspired algorithm, but unfortunately its capabilities were very limited: it was only capable of linear classification, though the multi-layer perceptron was later shown to be capable of non-linear classification. Towards the end of the 1980s and the beginning of the 1990s, such neural networks became fairly efficient, even leading to hardware neural network accelerators such as the ETANN from Intel. However, at the time, hardware neural networks had three limitations: (1) the application scope of neural networks was fairly restricted; (2) the clock frequency of processors was increasing fast enough that an accelerator could be outperformed by a software neural network run on a processor after a few technology generations; (3) competitive machine-learning algorithms emerged, especially Support Vector Machines (SVMs). Moreover, Cybenko's theorem [2], which stipulates that a neural network with a single hidden layer can approximate any continuous function to arbitrary precision, also suggested that deeper and larger neural networks would bring diminishing returns. The combination of these factors created the conditions for the temporary decline of neural networks.
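As a small illustration of this distinction, the sketch below (NumPy only; the hidden-layer width, learning rate and iteration count are arbitrary choices, not figures from the text) trains a one-hidden-layer perceptron on XOR, a problem that no purely linear classifier can separate:

```python
import numpy as np

# XOR: the classic example a single-layer (linear) perceptron cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # one hidden layer of 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # forward pass through the single hidden layer
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]]
```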

Still, researchers such as Yoshua Bengio, Geoffrey Hinton [5] and Yann LeCun [6] kept pushing the notion of neural networks in the community, and around 2006 neural network models with large and wide layers (at the time also combined with auto-encoders), i.e., so-called Deep Neural Networks (DNNs), were shown to achieve competitive results on some applications with respect to state-of-the-art machine-learning techniques. As GPUs started to allow the training of larger neural networks on larger training sets, the performance of these deep neural networks kept increasing, and they have now consistently been shown to achieve state-of-the-art performance on a broad range of applications.

Hardware Using Machine Learning [1]:

There is a remarkable convergence of trends in applications, machine-learning
and hardware, which creates opportunities for hardware machine-learning acceleration.
We now know that machine-learning has become ubiquitous in a broad range of Cloud
services and embedded devices. Simultaneously, as we mentioned in the previous sec-
tion, deep neural networks have become the state-of-the-art machine-learning algo-
rithms. And roughly at the same time, technology constraints have started to progres-
sively initiate a shift towards heterogeneous architectures and the notion of hardware
accelerators.

Hardware Neural Network Accelerators [1]:

Working of a Digital Computer, Then and Now:

The idea behind digital computers may be explained [7] by saying that these machines are intended to carry out any operations which could be done by a human computer. The human computer is supposed to be following fixed rules; he has no authority to deviate from them in any detail. We may suppose that these rules are supplied in a book, which is altered whenever he is put on to a new job. He also has an unlimited supply of paper on which he does his calculations. He may also do his multiplications and additions on a "desk machine," but this is not important. If we use the above explanation as a definition, we shall be in danger of circularity of argument. We avoid this by giving an outline of the means by which the desired effect is achieved.
A digital computer can usually be regarded as consisting of three parts: (i) Store. (ii) Executive unit. (iii) Control. The store is a store of information, and corresponds to the human computer's paper, whether this is the paper on which he does his calculations or that on which his book of rules is printed. In so far as the human computer does calculations in his head, a part of the store will correspond to his memory.

The executive unit is the part which carries out the various individual operations involved in a calculation. What these individual operations are will vary from machine to machine. Usually fairly lengthy operations can be done, such as "Multiply 3540675445 by 7076345687," but in some machines only very simple ones such as "Write down 0" are possible. We have mentioned that the "book of rules" supplied to the computer is replaced in the machine by a part of the store. It is then called the "table of instructions." It is the duty of the control to see that these instructions are obeyed correctly and in the right order. The control is so constructed that this necessarily happens.

The information in the store is usually broken up into packets of moderately small size. In one machine, for instance, a packet might consist of ten decimal digits. Numbers are assigned to the parts of the store in which the various packets of information are stored, in some systematic manner. A typical instruction might say, "Add the number stored in position 6809 to that in 4302 and put the result back into the latter storage position." Needless to say, it would not occur in the machine expressed in English. It would more likely be coded in a form such as 6809430217. Here 17 says which of various possible operations is to be performed on the two numbers. In this case the operation is that described above, viz., "Add the number…" It will be noticed that the instruction takes up 10 digits and so forms one packet of information, very conveniently. The control will normally take the instructions to be obeyed in the order of the positions in which they are stored, but occasionally an instruction such as "Now obey the instruction stored in position 5606, and continue from there" may be encountered, or again "If position 4505 contains 0, obey next the instruction stored in 6707, otherwise continue straight on." Instructions of these latter types are very important because they make it possible for a sequence of operations to be repeated over and over again until some condition is fulfilled, but in doing so to obey, not fresh instructions on each repetition, but the same ones over and over again.

To take a domestic analogy: suppose Mother wants Tommy to call at the cobbler's every morning on his way to school to see if her shoes are done; she can ask him afresh every morning. Alternatively she can stick up a notice once and for all in the hall which he will see when he leaves for school and which tells him to call for the shoes, and also to destroy the notice when he comes back if he has the shoes with him.

The reader must accept it as a fact that digital computers can be constructed, and indeed have been constructed, according to the principles we have described, and that they can in fact mimic the actions of a human computer very closely. The book of rules which we have described our human computer as using is of course a convenient fiction. Actual human computers really remember what they have got to do. If one wants to make a machine mimic the behaviour of the human computer in some complex operation, one has to ask him how it is done, and then translate the answer into the form of an instruction table. Constructing instruction tables is usually described as "programming." To "programme a machine to carry out the operation A" means to put the appropriate instruction table into the machine so that it will do A.

An interesting variant on the idea of a digital computer is a "digital computer with a random element." These have instructions involving the throwing of a die or some equivalent electronic process; one such instruction might for instance be, "Throw the die and put the resulting number into store 1000." Sometimes such a machine is described as having free will (though I would not use this phrase myself). It is not normally possible to determine from observing a machine whether it has a random element, for a similar effect can be produced by such devices as making the choices depend on the digits of the decimal for π.

Most actual digital computers have only a finite store. There is no theoretical difficulty in the idea of a computer with an unlimited store. Of course only a finite part can have been used at any one time. Likewise only a finite amount can have been constructed, but we can imagine more and more being added as required. Such computers have special theoretical interest and will be called infinite capacity computers.

The idea of a digital computer is an old one. Charles Babbage, Lucasian Professor of Mathematics at Cambridge from 1828 to 1839, planned such a machine, called the Analytical Engine, but it was never completed. Although Babbage had all the essential ideas, his machine was not at that time such a very attractive prospect. The speed which would have been available would be definitely faster than a human computer, but something like 100 times slower than the Manchester machine, itself one of the slower of the modern machines. The storage was to be purely mechanical, using wheels and cards.

The fact that Babbage's Analytical Engine was to be entirely mechanical will help us to rid ourselves of a superstition. Importance is often attached to the fact that modern digital computers are electrical, and that the nervous system also is electrical. Since Babbage's machine was not electrical, and since all digital computers are in a sense equivalent, we see that this use of electricity cannot be of theoretical importance. Of course electricity usually comes in where fast signalling is concerned, so that it is not surprising that we find it in both these connections. In the nervous system chemical phenomena are at least as important as electrical. In certain computers the storage system is mainly acoustic. The feature of using electricity is thus seen to be only a very superficial similarity. If we wish to find such similarities we should look rather for mathematical analogies of function.
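For the modern reader, the packet encoding described above can be mimicked in a few lines of code. The sketch below (Python; the dictionary-based store and the single opcode table are illustrative assumptions, not the coding of any historical machine) executes the ten-digit instruction 6809430217 exactly as the text describes:

```python
# Minimal sketch of the store / executive unit / control organisation.
# Each instruction is one ten-digit packet AAAABBBBOO: AAAA and BBBB are store
# positions, OO selects the operation. Opcode 17 ("add the number in AAAA to
# that in BBBB") follows the 6809430217 example; other opcodes are rejected.

store = {6809: 5, 4302: 7}            # the store: numbered packets of information
program = [6809430217]                # the table of instructions

def execute(packet, store):
    a = packet // 10**6               # first store position (6809)
    b = (packet // 100) % 10**4       # second store position (4302)
    op = packet % 100                 # operation code (17)
    if op == 17:                      # add the number in a to that in b
        store[b] = store.get(b, 0) + store.get(a, 0)
    else:
        raise ValueError(f"unknown operation code {op}")

control = 0                           # the control obeys instructions in order
while control < len(program):
    execute(program[control], store)
    control += 1

print(store[4302])                    # 12: the result goes back into the latter position
```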

Modern Machine:

We cannot say the computer works differently, but today [8], in the 21st century, even the most efficient, cool and profound machines tend to become tomorrow's recycled metal. The ability to capture a symbolic representation of spoken language for long-term storage freed information from the limits of individual memory. Today this technology is ubiquitous in industrialised countries. Not only do books, magazines and newspapers convey written information, but so do street signs, billboards, shop signs and even graffiti. Even candy wrappers are covered with writing. The background presence of these products of "literacy technology" does not require active attention, but the information to be conveyed is ready for use at a glance.

Silicon-based information technology today, in contrast, is far from having become part of the environment. More than 100 million personal computers have been sold, and nonetheless the computer remains largely in a world of its own. The state of the art is perhaps analogous to the period when scribes had to know as much about making ink or baking clay as they did about writing.

A computer today runs billions of instructions per second, stores billions of bits of data, and fetches and stores them in concert. In the age of silicon, the number of transistors keeps doubling roughly every three years.
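To make the doubling claim concrete, a one-line calculation (a sketch; the three-year doubling period is the figure quoted above, and the starting count is an arbitrary example) shows how quickly such growth compounds:

```python
# Growth implied by "the number of transistors doubles roughly every three years".
def transistors(initial, years, doubling_period=3):
    return initial * 2 ** (years / doubling_period)

# Example: one million transistors grows by a factor of 2**10 = 1024 over 30 years.
print(transistors(1_000_000, 30))   # 1024000000.0
```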

Testing the Turing Test: Human vs. Sophisticated Machine:

The key question of this paper is whether hardware making use of these technologies can still survive the test, that is: "Can the human be indistinguishable from a computer that uses powerful machine learning and high-powered hardware?" Consider a user asking the following questions [7]:

Q: Will X please tell me the length of his or her hair?
Q: Please write me a sonnet on the subject of the Forth Bridge.

A: Count me out on this one. I never could write poetry.

Q: Add 34957 to 70764.
A: (Pause about 30 seconds and then give as answer) 105621.

Q: Do you play chess?
A: Yes.

Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It
is your move. What do you play?

A: (After a pause of 15 seconds) R-R8 mate.

These questions were tested with low-powered, 'not so efficient' systems and a human a few decades ago.

Say the user asks the machine to perform some complex operation: the computer may respond with either a right or a wrong answer. Moreover, with machine learning, the computer can recognise that a test is being run, and so it might also deliberately give a wrong answer.
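A concrete example of this appears in the transcript above: the arithmetic question has exactly one correct answer, so a tester can check the reply mechanically. The short check below (Python; plain arithmetic, no assumptions beyond the transcript) shows that the answer given after the thirty-second pause is slightly wrong, which is precisely the kind of deliberate, human-looking error just described:

```python
# Check the arithmetic reply from the transcript above.
a, b = 34957, 70764
reply = 105621                    # the answer given after a pause of about 30 seconds

correct = a + b                   # 105721
print(correct, reply == correct)  # the reply is off by 100: a deliberate slip,
                                  # not the kind of mistake a flawless machine makes
```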

Can a tester find whether a computer or a human is answering these questions?

1. Calculation of the response time of the computer:

[Figure: the tester asks "Can you play chess?" and receives the reply "No".]

The speed of a modern computer is unmatchable by a human; a computer can compute 88% faster than a human does, so human performance will lag behind. The Turing Test therefore fails if the computer does not deliberately try to trick the tester by applying smart machine-learning processes; if the machine deliberately uses machine learning to trick the tester, then the Turing Test works. A minimal sketch of this check follows the list below.

2. Asking the participants to throw an object, or asking them for a codeword, will make the Turing Test fail in certain circumstances. A computer can do almost everything except make a physical motion without an external interface; this is the major drawback even of today's most sophisticated computers, since no computer can yet perform a physical action on its own. So when a tester asks "Can you throw that?" and the computer answers "Yes!", the computer will certainly fail the Turing Test, unless both the computer and the human reject the request by saying that they cannot do it.
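The following is a minimal sketch of the response-time check from point 1 above (Python; the function names and the two-to-six-second delay are illustrative assumptions, not a standard protocol): a machine that answers instantly gives itself away, so to survive it must deliberately pad its reply to a human-like latency.

```python
import random
import time

def machine_answer(question):
    # A machine can answer "Can you play chess?" effectively instantly...
    return "No"

def humanised_answer(question):
    # ...so to survive the response-time check it deliberately waits a
    # human-like interval before replying (the 2-6 second range is an
    # illustrative assumption).
    time.sleep(random.uniform(2.0, 6.0))
    return machine_answer(question)

start = time.time()
reply = humanised_answer("Can you play chess?")
latency = time.time() - start
print(f"{reply!r} after {latency:.1f} s")   # an instant reply would betray the machine
```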

Inferences from Testing the Turing Test in 2017:

The Turing Test works on today's computers if and only if machine-learning algorithms are applied and the computer can learn about the human beside it and the tester. Machines are far more powerful today than in the 1950s, when the tests were first proposed; unlike then, computers have grown far more capable than humans at raw computation, and the Turing Test fails on this point, since a human cannot match the processing time of a modern, sophisticated multicore machine. The Turing Test passes only if machine-learning algorithms are applied alongside.

Calculation:

Finally, a machine-learning algorithm works, and a software architecture is correct, only if the Turing Test is satisfied. It can therefore be concluded that the Turing Test can be used for verifying the correctness of the algorithm and for validating the software architecture.

Future Works:

This work might be extended to testing the Turing Test on chatbots on mobile phones, since a mobile phone is not as powerful as a modern multicore PC.

References:

1. O. Temam. Enabling Future Progress in Machine-Learning. Google.

2. G. Cybenko. (1989) Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2(4), 303-314.

3. Y. LeCun, L. Bottou, Y. Bengio, P. Haffner. (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE.

4. Y. Bengio, P. Lamblin, D. Popovici, H. Larochelle. (2007) Greedy layer-wise training of deep networks. Advances in Neural Information Processing Systems.

5. T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, O. Temam. (2014) DianNao: A Small-Footprint High-Throughput Accelerator for Ubiquitous Machine-Learning. International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS).

6. D. E. Rumelhart, G. E. Hinton, R. J. Williams. (1986) Learning internal representations by error propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1.

7. A. M. Turing. (1950) Computing Machinery and Intelligence.

8. M. Weiser. (1991) The Computer for the 21st Century.
