Probability
Page: 1-7 (7)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010006
PDF Price: $15
Abstract
In this section we review the main probability operators that are strongly
associated with the central themes of this book, the Discrete-Time Markov
Chain Process and the Continuous-Time Markov Chain Process. The chapter begins
with the basic definitions of certain, dependent, independent and impossible
events. We then review the concept of conditional probability, which permeates
all the following chapters, as well as the multiplication rule. Finally,
Bayes' Theorem is addressed, which is the basis of the procedures described in
the last chapters, namely the Discrete- and Continuous-Time Markov Chain Processes.
All sections are exemplified as simply and completely as possible, so that
the reader has no difficulty with the use and language of these operators in the
following sections.
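As a minimal sketch of how Bayes' Theorem operates (the sensitivity, specificity and prevalence figures below are illustrative, not taken from the book):

```python
def bayes_posterior(p_a, p_b_given_a, p_b_given_not_a):
    """P(A|B) via Bayes' theorem, expanding P(B) by total probability."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1.0 - p_a)
    return p_b_given_a * p_a / p_b

# A 99%-sensitive, 95%-specific test at 1% prevalence:
print(round(bayes_posterior(0.01, 0.99, 0.05), 4))  # 0.1667
```

The low posterior despite a positive result illustrates why the prior (the base rate) matters in the conditional-probability machinery used throughout the book.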
Matrix Models
Page: 8-21 (14)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010007
Abstract
This chapter describes and exemplifies the matrix models:
the Lefkovitch model, the Leslie model, the Malthus model, and stability matrix models. From
these, the Discrete- and Continuous-Time Markov Chain Processes are introduced.
These matrix models are presented in the order in which they historically appeared, and it is
highlighted how the matrix structure offers a simple algebraic solution to problems
involving multiple variables, where the elements of those matrices are the conditional
probabilities of going from a state A (row i) to a state B (column j). Once these
matrix models have been defined and exemplified, it is shown that the eigenvalues and eigenvectors of the conditional probability matrix determine the long-term
stability of the Markov Chain Process.
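The long-term behaviour mentioned above can be sketched numerically: the steady state is the eigenvector of the transition matrix for eigenvalue 1, found here by repeated multiplication (power iteration) on a 2-state matrix whose entries are assumptions, not values from the book:

```python
def steady_state(P, v, steps=200):
    """Iterate v <- v P; for a regular stochastic matrix this converges."""
    for _ in range(steps):
        v = [sum(v[i] * P[i][j] for i in range(len(v)))
             for j in range(len(v))]
    return v

# P[i][j] = conditional probability of moving from state i to state j.
P = [[0.9, 0.1],
     [0.5, 0.5]]
print(steady_state(P, [1.0, 0.0]))  # approaches [5/6, 1/6]
```

The same limit is reached from any initial distribution, which is the sense in which the eigenstructure determines long-term stability.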
Random Walks
Page: 22-27 (6)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010008
Abstract
In this chapter a review is made of the main random walks in the plane and in
space, and we then focus on two random walks that are important to the purpose of
this book: the Gaussian-Dimensional Random Walk and the Markov-Dimensional Random Walk. Their definition centres on a random process in which the position at a given moment depends only on the previous step; this particularity is called the Markov
condition and is essentially a Markov Chain Process. Random walks are used for
simulation in different disciplines because of the simplicity with which they handle phenomena involving several variables; their use in physics, chemistry, ecology, biology, psychology
and economics stands out. In this chapter we do not cover random walks on finite graphs, since they are outside the scope of this work. The definitions of these
processes are accompanied by graphic and analytical examples.
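A minimal sketch of a one-dimensional Gaussian random walk, in which each position depends only on the previous one (the Markov condition); the step size, walk length and seed are illustrative:

```python
import random

def gaussian_walk(n_steps, sigma=1.0, seed=0):
    """1-D Gaussian random walk: next position = current + N(0, sigma)."""
    rng = random.Random(seed)
    position, path = 0.0, [0.0]
    for _ in range(n_steps):
        position += rng.gauss(0.0, sigma)  # depends only on the current state
        path.append(position)
    return path

path = gaussian_walk(1000)
print(len(path), path[-1])  # 1001 positions; the endpoint varies with the seed
```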
Markov Chain Process
Page: 28-36 (9)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010009
Abstract
In this chapter, building on the historical introduction given in the previous chapters, we introduce and exemplify all the components of a Markov Chain
Process, such as the initial state vector, the Markov property (also called the Markov condition), the matrix of transition probabilities, and the steady-state vector. A Markov Chain Process
is formally defined, and by way of categorization this process is divided into two
types, the Discrete-Time Markov Chain Process and the Continuous-Time Markov Chain
Process, according to whether the time between states in a
random walk is discrete or continuous. Each of its components is exemplified, and
all the examples are solved analytically.
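The components named above can be sketched together, with illustrative values: an initial state vector, a row-stochastic matrix of transition probabilities, and one step of the chain:

```python
def step(v, P):
    """One transition of the chain: v_{k+1} = v_k P."""
    return [sum(v[i] * P[i][j] for i in range(len(v)))
            for j in range(len(P[0]))]

v0 = [0.6, 0.4]              # initial state vector
P = [[0.7, 0.3],             # row i: probabilities of leaving state i
     [0.2, 0.8]]

assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)  # row-stochastic
v1 = step(v0, P)
print(v1)  # approximately [0.5, 0.5]
```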
Discrete-Time Markov Chain Process
Page: 37-50 (14)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010010
Abstract
In this chapter, we define the Discrete-Time Markov Chain Process operator, apply all the initial components seen in the previous chapter, and define and exemplify the
vector of final conditions, also known as the steady-state vector; this vector shows the final state of the process and depends on the initial state
vector and the matrix of transition probabilities. The solution mechanism is shown
both by iteration of the vector-matrix product and by determining the eigenvalues and eigenvectors of the matrix of transition probabilities. In an effort to categorize the possible matrices
of transition probabilities, they are illustrated in reducible form, transient form and
recurrent form. As a direct application of the Discrete-Time Markov process, the
Metropolis Algorithm is presented, as well as a regularity that can be observed in
the matrix of transition probabilities and that is described in the section Law of
Large Numbers. Some complete basic examples are provided to illustrate the definition
and operation of this random walk.
Continuous-Time Markov Chain Process
Page: 51-62 (12)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010011
Abstract
In this chapter, we introduce, through formal definitions but also with
schematics and fully solved examples, the main parts of the random walk known as the Continuous-Time Markov Chain Process. This chapter is particularly oriented to the modeling
of waiting lines, cases of wide applicability in all scientific disciplines.
The chapter begins by describing the Exponential and Poisson distributions, which
are articulated in the Continuous-Time Markov Chain Process as the elements of
the matrix of conditional probabilities. It then follows the same methodology as
the discrete case, characterizing that matrix as aperiodic or irreducible, to finally
solve it as a system of linear equations by the usual methods or through its diagonalization by means of its eigenvalues and eigenvectors.
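For a two-state continuous-time chain the balance equations have a closed-form solution; a minimal sketch, with illustrative rates a (state 0 to 1) and b (state 1 to 0), where solving pi Q = 0 with pi0 + pi1 = 1 gives pi = (b/(a+b), a/(a+b)):

```python
def two_state_ctmc_steady(a, b):
    """Steady-state distribution of the 2-state chain with rates a, b."""
    return (b / (a + b), a / (a + b))

print(two_state_ctmc_steady(a=2.0, b=3.0))  # (0.6, 0.4)
```

The holding time in each state is exponentially distributed with the corresponding rate, which is where the Exponential distribution described in the chapter enters.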
Computational Urban Issues
Page: 63-70 (8)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010012
Abstract
This chapter defines a Discrete-Time and a Continuous-Time Markov
Chain Process oriented to the flow of people from one point to another in a region
or city, based on their transit through different neighbourhoods. This is a current problem
that affects more and more countries, due to the growth of communication routes
and means of transport, and it has been modeled under different mathematical
approaches; it is, moreover, a multifactorial problem. In the discrete-type modeling we register in the matrix of conditional probabilities the conditional
probabilities of going from a region i to another region j. In the continuous-type modeling we consider the rate of pedestrian mobility between regions.
Computational Biology Issues
Page: 71-77 (7)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010013
Abstract
This chapter defines Discrete- and Continuous-Time Markov Chain Processes aimed at identifying the preponderant function of a protein from the analysis of
its sequence, adapting the matrix of transition probabilities so that its elements
are occupied by the relative frequencies of the interactions of the pairs
of amino acids located there. The chapter illustrates in detail this methodology and
its robustness in recovering the preponderant activity among other possible functions that
the protein could offer, if minimal changes were made in its primary structure. The
present approach is expected to be useful for the construction of synthetic proteins.
The same processes are also shown to identify proteins from the specific regularities found in their sequences.
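The idea of filling a matrix-like structure with relative frequencies of adjacent amino-acid pairs can be sketched as follows; the short sequence is hypothetical, and a real analysis would use the full 20x20 pair table:

```python
from collections import Counter

def pair_frequencies(seq):
    """Relative frequencies of adjacent symbol pairs in a sequence."""
    pairs = Counter(zip(seq, seq[1:]))
    total = sum(pairs.values())
    return {p: c / total for p, c in pairs.items()}

print(pair_frequencies("ACDAC"))  # ('A', 'C') occurs in 2 of 4 adjacent pairs
```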
Computational Financial Issues
Page: 78-86 (9)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010014
Abstract
This chapter defines Discrete- and Continuous-Time Markov Chain Processes
aimed at predicting market trends, taking the ratings that the stock exchange gives
to shares. The chapter is developed through two cases that affect the corresponding
matrix of transition probabilities in different ways: in the discrete case, conditional
probabilities are assumed for each of the groups of shares registered in
that matrix, and in the continuous case, different forward and backward speeds between shares are assumed.
Computational Science Issues
Page: 87-92 (6)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010015
Abstract
This chapter introduces a Hierarchical Markov Chain Process over a hierarchical network whose nodes are Discrete-Time Markov Chain and Continuous-Time Markov Chain Processes. We consider it useful to carry out this non-exhaustive
analysis to discuss the advantages and disadvantages of a random walk of this nature and its possible application, particularly in real time and in unsupervised mode.
Examples are provided under the discrete and continuous schemes.
Computational Medicine Issues
Page: 93-99 (7)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010016
Abstract
This chapter first introduces a Discrete-Time Markov Chain Process
aimed at predicting the spread of a disease in a region, based on a census of the subjects: S, susceptible; Ia, active infected; In, inactive infected; Na, dead by
natural causes; Nm, killed by the disease. Later, a Continuous-Time Markov Chain Process is introduced to predict the spread of a disease based on a different
census of the subjects: S, the number of susceptible individuals; I, the number of infected
individuals; and R, the number of recovered individuals. Both methods are known to be
effective in issuing early warnings for serious respiratory infections. Both cases are
exemplified and discussed.
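A deterministic discrete-time sketch in the spirit of the S/I/R census above; the rates beta and gamma and the initial census are assumptions, not the book's values:

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One step of a deterministic discrete S-I-R recurrence."""
    n = s + i + r
    new_infections = beta * s * i / n   # susceptible -> infected
    new_recoveries = gamma * i          # infected -> recovered
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

s, i, r = 990.0, 10.0, 0.0              # illustrative initial census
for _ in range(100):
    s, i, r = sir_step(s, i, r)
print(round(s), round(i), round(r))     # population total is conserved
```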
Computational Social Sciences Issues
Page: 100-105 (6)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010017
Abstract
This chapter defines a Discrete-Time and a Continuous-Time Markov
Chain Process aimed at identifying the language used to write a text. This is a brief
introduction to show the usefulness of both random walks in the recognition of a
language, and how these methods can be extended to deeper recognition using other
structural features of the language. An example is established and solved using diphthongs
of the English language.
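A crude sketch of the diphthong idea: relative frequencies of a few English vowel pairs in a text, a feature a transition-probability model could then refine. The list of pairs and the sample sentence are illustrative, not the book's example:

```python
DIPHTHONGS = ["ai", "ea", "ou", "oi", "ow"]    # illustrative subset

def diphthong_profile(text):
    """Relative frequency of each tracked letter pair in the text."""
    text = text.lower()
    counts = {d: text.count(d) for d in DIPHTHONGS}
    total = sum(counts.values()) or 1          # avoid division by zero
    return {d: c / total for d, c in counts.items()}

print(diphthong_profile("The rain in Spain stays mainly in the plain"))
```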
Computational Operations Research Issues
Page: 106-111 (6)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010018
Abstract
This chapter introduces Discrete- and Continuous-Time Markov Chain
Processes aimed at predicting the behavior of a waiting line, based on the probabilities
of going from state i to state j, and also on the forward rate lambda and
backward rate mu. In both cases a numerical example is provided that shows the mechanics of both random walks, as well as pertinent observations when these parameters are altered, and the possibility of altering these parameters in
real time in an unsupervised algorithm is discussed.
Computational Information System Issues
Page: 112-117 (6)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010019
Abstract
This chapter defines a PageRank system that ranks web pages according to
the transit detected on them. The simulation uses Discrete-Time and Continuous-Time Markov Chain Processes. For both approximations, numerical examples of
both the conditional probabilities and the transition rates are provided. While the two
models are treated separately, in the end the desirability of designing a mixed network is discussed.
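A minimal sketch of PageRank as a Discrete-Time Markov Chain (the random-surfer model): a surfer follows links with probability d and jumps to a random page with probability 1 - d. The tiny 3-page link graph is hypothetical:

```python
def pagerank(links, d=0.85, iters=100):
    """Power iteration for PageRank on a list of outgoing-link lists."""
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n       # random-jump component
        for page, outgoing in enumerate(links):
            for target in outgoing:     # link-following component
                new[target] += d * rank[page] / len(outgoing)
        rank = new
    return rank

# page 0 links to 1 and 2; page 1 links to 2; page 2 links back to 0
links = [[1, 2], [2], [0]]
print(pagerank(links))  # page 2 accumulates the most rank
```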
Future Uses
Page: 118-120 (3)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010020
Abstract
This chapter gives a quick review of how the methods studied in this
work, the Discrete- and Continuous-Time Markov Chain Processes, can be applied
to different fields, and it explores their use with different approaches. It also examines
how the applicability of these random walks can affect diverse disciplines with
different impacts. The implementation of these methods can even include the option
of self-learning programming.
Solutions
Page: 121-140 (20)
Author: Carlos Polanco*
DOI: 10.2174/9789815080476123010021
Introduction
Markov Chain Process: Theory and Cases is designed for students of natural and formal sciences. It explains the fundamentals related to a stochastic process that satisfies the Markov property. It presents 10 structured chapters that provide a comprehensive insight into the complexity of this subject by presenting many examples and case studies that will help readers to deepen their acquired knowledge and relate learned theory to practice.

This book is divided into four parts. The first part thoroughly examines the definitions of probability, independent events, mutually (and not mutually) exclusive events, conditional probability, and Bayes' theorem, which are essential elements in Markov's theory. The second part examines the elements of probability vectors, stochastic matrices, regular stochastic matrices, and fixed points. The third part presents multiple cases in various disciplines: Predictive computational science, Urban complex systems, Computational finance, Computational biology, Complex systems theory, and Computational Science in Engineering. The last part introduces learners to Fortran 90 programs and Linux scripts.

To make the comprehension of Markov Chain concepts easier, all the examples, exercises, and case studies presented in this book are completely solved and given in a separate section. This book serves as a textbook (either primary or auxiliary) for students required to understand Markov Chains in their courses, and as a reference book for researchers who want to learn about methods that involve Markov Processes.