Book Volume 2
Preface
Page: i-ii (2)
Author: Zaheer Ul-Haq and Jeffry D. Madura
DOI: 10.2174/9781608059782115020001
List of Contributors
Page: iii-v (3)
Author: Zaheer Ul-Haq and Jeffry D. Madura
DOI: 10.2174/9781608059782115020002
The Use of Dedicated Processors to Accelerate the Identification of Novel Antibacterial Peptides
Page: 3-26 (24)
Author: Gabriel del Rio, Miguel Arias-Estrada and Carlos Polanco González
DOI: 10.2174/9781608059782115020003
Abstract
For the past several decades, the search for novel antibiotic compounds has been motivated by Fleming's serendipitous discovery of penicillin in 1928. Since then, researchers have been isolating compounds from a very wide range of living organisms in the hope of repeating Fleming's story. Yet the rate of discovery of new pharmaceutical compounds has reached a plateau in the last decade, and this has promoted the use of alternative approaches to identify antibiotic compounds. One of these approaches uses the accumulated information on pharmaceutical compounds to predict new ones using high-performance computers. Such an approach opens up the possibility of screening millions of compounds in computer simulations. The better predictors, however, use sophisticated algorithms that consume a significant amount of computer time, reducing the number of compounds that can be analyzed and hence the likelihood of identifying potential antibiotic compounds. At the same time, the appearance at the end of the past century of computer processors that can be tailored to perform specific tasks provided a tool to accelerate high-performance computations. The current review focuses on the use of these dedicated processor devices, particularly Field Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs), to identify new antibacterial peptides. To that end, we review some of the common computational methods used to identify antibacterial peptides and highlight the difficulties and advantages these algorithms present when coded for FPGA/GPU computational devices. We discuss the potential of reaching supercomputing performance on FPGAs/GPUs, and the approaches to parallelism on these platforms.
Computational Chemistry for Photosensitizer Design and Investigation of DNA Damage
Page: 27-70 (44)
Author: Kazutaka Hirakawa
DOI: 10.2174/9781608059782115020004
Abstract
Computational chemistry can be used to predict photochemical reactivity and to design photosensitizers for cancer phototherapy. For example, the DNA-damaging activity of a photosensitizer can be estimated from a calculation of the molecule's HOMO energy. In general, photosensitized DNA damage is mediated by the following two processes: 1) photoinduced electron transfer from a DNA base to the photoexcited photosensitizer, and 2) base modification by singlet oxygen generated through photoenergy transfer from the photosensitizer to oxygen. The DNA-damaging activity of a photosensitizer acting through electron transfer is closely related to the HOMO energy level of the molecule. It has been demonstrated that the extent of DNA damage photosensitized by xanthone analogues is proportional to the energy gap between the HOMO level of the photosensitizer and that of guanine. In addition, computational chemistry can be used to investigate the mechanism of chemopreventive effects on phototoxicity. Furthermore, molecular orbital calculations are useful for designing photosensitizers in which the activity of singlet oxygen generation is controlled by DNA recognition. Singlet oxygen is an important reactive oxygen species for attacking cancer cells, and control of singlet oxygen generation by DNA is necessary to achieve tailor-made cancer phototherapy. Several porphyrin photosensitizers have been designed on the basis of molecular orbital calculations to control the activity of singlet oxygen generation.
How to Judge Predictive Quality of Classification and Regression Based QSAR Models?
Page: 71-120 (50)
Author: Kunal Roy and Supratik Kar
DOI: 10.2174/9781608059782115020005
Abstract
Quantitative structure-activity relationship (QSAR) modeling is a statistical approach that can be used in drug discovery, environmental fate modeling, and the prediction of properties and activities of new, untested compounds. Validation has been identified as one of the important steps for checking the robustness and reliability of QSAR models, and various methodological aspects of QSAR validation have been the subject of strong debate within the academic and regulatory communities. One of the principles (Principle 4) of the Organization for Economic Cooperation and Development (OECD) refers to the need to establish “appropriate measures of goodness-of-fit, robustness and predictivity” for any QSAR model. Validation strategies are recognized as decisive steps for checking the statistical acceptability and applicability of the constructed models to a new set of data in order to judge the confidence of predictions. Validation is a holistic practice that comprises evaluation of issues such as data quality, the applicability of the model for prediction purposes, and mechanistic interpretation, in addition to statistical judgment, and validation strategies are largely dependent on various validation metrics. In view of the importance of QSAR validation approaches and of the different validation parameters in the development of successful and acceptable QSAR models, we herein provide an overview of the traditional as well as relatively new validation metrics used to judge the quality of regression- and classification-based QSAR models.
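The chapter's full catalogue of metrics is not reproduced here; as a generic illustration of two of the most widely used regression-model metrics it surveys, the following Python sketch computes the determination coefficient R² (goodness of fit) and the leave-one-out cross-validated Q² (internal predictivity) for a one-descriptor linear model. The descriptor/activity values are toy numbers chosen for illustration, not data from the chapter.

```python
# Illustrative sketch of two common QSAR validation metrics:
# R^2 (goodness of fit) and leave-one-out Q^2 (internal predictivity).

def simple_lin_fit(xs, ys):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

def r_squared(ys, preds):
    """R^2 = 1 - SS_res / SS_tot over the training set."""
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

def q2_loo(xs, ys):
    """Q^2: each point is predicted by a model refit without it (PRESS)."""
    my = sum(ys) / len(ys)
    press = 0.0
    for i in range(len(xs)):
        a, b = simple_lin_fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        press += (ys[i] - (a * xs[i] + b)) ** 2
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - press / ss_tot

# Toy descriptor (e.g. a hypothetical logP) vs. activity values:
x = [0.5, 1.1, 1.9, 2.4, 3.0, 3.8]
y = [1.0, 1.6, 2.1, 2.9, 3.2, 4.1]
a, b = simple_lin_fit(x, y)
r2 = r_squared(y, [a * xi + b for xi in x])
q2 = q2_loo(x, y)
print(f"R^2 = {r2:.3f}, Q^2 = {q2:.3f}")
```

For a well-behaved model, Q² is somewhat lower than R²; a large gap between the two is one classic warning sign of overfitting that validation metrics of this kind are meant to expose.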
Density Functional Studies of Bis-alkylating Nitrogen Mustards
Page: 121-186 (66)
Author: Pradip Kr. Bhattacharyya, Sourab Sinha, Nabajit Sarmah and Bhabesh Chandra Deka
DOI: 10.2174/9781608059782115020006
Abstract
Nitrogen mustards have been among the most extensively used chemotherapeutic agents since their introduction in the mid-1940s. The high cytotoxicity of these drugs is attributed to their ability to form DNA interstrand cross-linked adducts, thereby inhibiting DNA replication. Interstrand cross-linking occurs via the formation of an unstable intermediate, the aziridinium ion, followed by the formation of mono-adducts. Mustine, the first member of this family, suffers from some serious drawbacks, such as a high rate of hydrolysis. More stable analogs have therefore been sought, and since its discovery hundreds of analogs have been synthesized.
This article presents a brief introduction to nitrogen mustards and discusses the work already devoted to establishing the mechanism of action of this class of drugs. A brief discussion of DFT and DFRT is furnished in section 1.2, and computational studies performed on nitrogen mustards are discussed in sections 1.3 and 1.4. Section 1.4 consists of research work from our group, with special reference to DFT and DFRT.
From Conventional Prodrugs to Prodrugs Designed by Molecular Orbital Methods
Page: 187-249 (63)
Author: Rafik Karaman
DOI: 10.2174/9781608059782115020007
Abstract
In this chapter we present a novel prodrug approach based on enzyme models that have been advocated to explain the mechanisms by which enzymes catalyze biochemical transformations. The tool exploited in the design of these novel prodrugs is computational calculation using molecular orbital (MO) and molecular mechanics (MM) methods, together with correlations between experimental and calculated rate values for some intramolecular processes. In this approach, no enzyme is needed to catalyze the interconversion of a prodrug to its active parent drug; the conversion rate is determined solely by the factors affecting the rate-limiting step of the intramolecular (interconversion) process. Knowledge gained from unraveling the mechanisms of the studied enzyme models (cyclization of Bruice's dicarboxylic semiesters and acid-catalyzed hydrolysis of Kirby's N-alkylmaleamic acids) was exploited in the design. It is believed that the use of this approach might eliminate all the disadvantages associated with prodrug interconversion via the metabolic (enzyme-catalyzed) route. Using this approach, we have succeeded in designing novel prodrugs for a number of commonly used drugs, such as the anti-bleeding agent tranexamic acid, the antihypertensive agent atenolol, the painkiller paracetamol, and the antibacterial agents amoxicillin, cephalexin and cefuroxime. In vitro studies have shown that, in contrast to the active drugs (atenolol, paracetamol, amoxicillin and cephalexin), which have a bitter taste, the corresponding prodrugs were bitterless. It is therefore expected that patient compliance, especially in the pediatric and geriatric populations, will be significantly increased.
Structural and Vibrational Investigation on a Benzoxazin Derivative with Potential Antibacterial Activity
Page: 250-280 (31)
Author: María V. Castillo, Elida Romano, Ana B. Raschi and Silvia A. Brandán
DOI: 10.2174/9781608059782115020008
Abstract
In this chapter, the structural and vibrational properties of 2-(4-methylphenyl)-4H-3,1-benzoxazin-4-one were studied using the available experimental infrared spectrum and the hybrid B3LYP/6-31G* and B3LYP/6-311++G** methods. The bond orders, charge transfers and stabilization energies of the compound were calculated employing Natural Bond Orbital (NBO) analysis, while the topological properties at the same levels of theory were calculated using the Atoms in Molecules (AIM) theory. Furthermore, the frontier molecular orbitals (HOMO and LUMO) of the compound were also computed, and their values were compared with those reported for 2-(4-chlorophenyl)-4H-3,1-benzoxazin-4-one and 2-phenyl-4H-3,1-benzoxazin-4-one. In addition, the harmonic vibrational frequencies at the same levels of theory were calculated using the optimized geometries of the compound, and Pulay's scaled quantum mechanical force field (SQMFF) methodology was then used, together with the corresponding natural internal coordinates, to perform the complete assignment of the vibrational spectra. The scaled force constants were also presented together with the force fields at both levels of approximation, and the Raman spectrum of the compound was predicted at the B3LYP/6-31G* level of theory.
First Principles Computational Biochemistry with deMon2k
Page: 281-325 (45)
Author: A. Alvarez-Ibarra, P. Calaminici, A. Goursot, C. Z. Gómez-Castro, R. Grande-Aztatzi, T. Mineva, D. R. Salahub, J. M. Vásquez-Pérez, A. Vela, B. Zuniga-Gutierrez and A. M. Köster
DOI: 10.2174/9781608059782115020009
Abstract
The growth of computational power, provided by new hardware technologies and by the development of better theoretical methods and algorithms, allows more than ever an improvement in the reliability of computational predictions in the medical sciences, along with a better understanding of the underlying molecular mechanisms. However, one limitation of computational chemistry approaches in the field of biological systems is the complexity of the molecules and of the environment in which such molecules are to be studied. Important issues, such as the determination of molecular properties that depend on the electronic structure, pose a considerable challenge when all-electron methodologies are required in the investigation. The most rigorous and sophisticated electronic structure methodologies, such as density functional theory (DFT), are usually overwhelmed by the molecular size of most pharmacological targets. However, important implementations were recently achieved by the developer group of the computational chemistry code deMon2k. Because the computation of electrostatic interaction integrals is an important bottleneck in all-electron calculations, three new implementations have been worked out to eliminate this bottleneck. These implementations now allow deMon2k to explore biological and pharmacological systems within the framework of all-electron DFT methodologies.
Recent Advances in Computational Simulations of Lipid Bilayer Based Molecular Systems
Page: 326-388 (63)
Author: R. Galeazzi, E. Laudadio and L. Massaccesi
DOI: 10.2174/9781608059782115020010
Abstract
Computer simulation of lipid bilayers has become prominent over the last couple of decades. As computational resources have become more available to the scientific community, simulations have played an increasingly important role in understanding the processes that take place in and across cell membranes. The scientific interest is closely related to the biological importance of biomembranes, which act as barriers separating the cell's internal environment from the external one. Membranes are selectively permeable, and thus they actively control the movement of compounds into and out of cells. These membranes have a heterogeneous, complex composition, including many different lipids together with proteins, steroids, carbohydrates and other membrane-associated molecules. Each of these compounds is involved in a great number of cellular processes, and thus membranes exist as dynamic structures. As a consequence, understanding biomembrane function requires knowledge of the chemical-physical behavior of lipid bilayers and represents a great challenge in the biophysical and medical sciences.
In the last decades, molecular dynamics (MD) simulation has become one of the most useful tools for in silico investigations of molecular structures; such computations provide structural and dynamical information that is essential yet hard to obtain by experimental methods, and they furnish real-time imaging of a system at atomistic resolution. In this chapter, we point out the recent advances in computer simulations of lipid bilayers and protein-lipid bilayer systems over the last few years, covering several selected subjects such as the state of the art in ad hoc force field development, cholesterol-induced effects on the structure and properties of the bilayer, mixed-composition lipid matrices, and biomolecular applications of coarse-grained models.
Data Quality Assurance and Statistical Analysis of High Throughput Screenings for Drug Discovery
Page: 389-425 (37)
Author: Yang Zhong, Zuojun Guo and Jianwei Che
DOI: 10.2174/9781608059782115020011
Abstract
High throughput screening (HTS) is an important tool in modern drug discovery, and many recent successful drugs can be traced back to HTS [1]. The platform has proliferated from the pharmaceutical industry to national labs (e.g. the NIH Molecular Libraries Screening Centers Network) and to academic institutions. Besides throughput improvements, from thousands of molecules in the early days to multimillion-compound libraries now, it has been adapted to increasingly sophisticated biological assays such as high-content imaging. The vast amount of biological data from these screens presents a significant challenge for identifying interesting molecules in various biological processes. Because of the intrinsic noise of HTS and the complex biological processes probed by most assays, HTS results need careful analysis to identify reliable hit molecules, and various data normalization and analysis algorithms have been developed by different groups over the years. In this chapter, we briefly describe some common issues encountered in HTS and the related analyses.
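The normalization algorithms themselves are described in the chapter; as a generic illustration of two routine steps in HTS quality control, the following Python sketch computes the standard Z'-factor assay-quality metric and per-plate z-score normalization. The plate readings below are invented toy values, not data from the chapter.

```python
# Illustrative sketch of two routine HTS analysis steps:
# the Z'-factor quality metric and per-plate z-score normalization.
from statistics import mean, stdev

def z_prime(pos_ctrl, neg_ctrl):
    """Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.
    Values above ~0.5 are conventionally taken to indicate a robust,
    screenable assay."""
    return 1.0 - 3.0 * (stdev(pos_ctrl) + stdev(neg_ctrl)) / abs(
        mean(pos_ctrl) - mean(neg_ctrl))

def z_scores(wells):
    """Normalize raw well readings to plate-level z-scores."""
    m, s = mean(wells), stdev(wells)
    return [(w - m) / s for w in wells]

# Toy plate: positive/negative control wells and sample wells.
pos = [95.0, 98.0, 96.5, 97.2]
neg = [10.0, 12.5, 11.0, 9.8]
samples = [12.0, 11.5, 90.0, 13.2, 10.9, 12.8]  # one apparent hit

print(f"Z' = {z_prime(pos, neg):.2f}")
hits = [i for i, z in enumerate(z_scores(samples)) if z > 2.0]
print("hit wells:", hits)
```

A hit threshold of |z| > 2 is a simple convention used here for illustration; real campaigns typically combine such cutoffs with plate-pattern corrections and counter-screens to suppress the false positives this naive rule would admit.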
Subject Index
Page: 426-431 (6)
Author: Zaheer Ul-Haq and Jeffry D. Madura
DOI: 10.2174/9781608059782115020012
Introduction
Frontiers in Computational Chemistry presents contemporary research on molecular modeling techniques used in drug discovery and the drug development process: computer-aided molecular design, drug discovery and development, lead generation, lead optimization, database management, computer and molecular graphics, and the development of new computational methods or efficient algorithms for the simulation of chemical phenomena, including analyses of biological activity. The second volume of this series features nine articles covering topics such as antibacterial drug discovery, high throughput screening, computational biochemistry with deMon2k, lipid bilayer analysis, and much more.