Create an MMPDB ( matched molecular pair database )!

Matched molecular pair analysis (MMPA) is a very common method for analyzing SAR among medicinal chemists. There are lots of publications about it and applications in this area.
I often use rdkit/Contrib/mmpa to make my own MMP datasets.
The original algorithm is described at the following URL.

Yesterday, good news was announced by @RDKit_org: the release of a package that can build an MMPDB.
I tried the package immediately.
The package is provided via a GitHub repo. To use it, I first needed to install apsw, which can be installed with conda.
Then mmpdb itself is installed with a Python script.

iwatobipen$ conda install -c conda-forge apsw
iwatobipen$ git clone
iwatobipen$ cd mmpdb
iwatobipen$ python install

After a successful installation, the mmpdb command was available in my terminal.
I used CYP3A4 inhibition data from ChEMBL for the test.
I prepared two files: one with a SMILES and an ID, and another with the ID and an IC50 value.
'*' means a missing value. In the following case I provided a single property ( IC50 ), but the package can handle multiple properties. Readers who are interested in the package can see more details with the mmpdb --help command.

iwatobipen$ head -n 10 chembl_cyp3a4.csv 
iwatobipen$ head -n 10 prop.csv 
924282	*
605	*
1698776	*
59721	19952.62
759749	2511.89
819161	2511.89

mmpdb fragment has a --cut-smarts option.
It seems attractive to me! 😉

--cut-smarts SMARTS   alternate SMARTS pattern to use for cutting (default:
                      '[CH2])]'), or use one of: 'default',
                      'cut_AlkylChains', 'cut_Amides', 'cut_all',
                      'exocyclic', 'exocyclic_NoMethyl'
Next, build the MMPDB and join the property data to the DB.

# Run fragmentation. My input file has a header and is comma-delimited ( the default is whitespace ). The output file is cyp3a4.fragments.
# Each line of the input file must be unique!
iwatobipen$ mmpdb fragment chembl_cyp3a4.csv --has-header --delimiter 'comma' -o cyp3a4.fragments
# Run indexing on the fragmented file and create an MMP DB.
iwatobipen$ mmpdb index cyp3a4.fragments -o cyp3a4.mmpdb

OK, I got the cyp3a4.mmpdb file. (sqlite3 format)
Next, add the properties to the DB by typing the following command.

iwatobipen$ mmpdb loadprops -p prop.csv cyp3a4.mmpdb
Using dataset: MMPs from 'cyp3a4.fragments'
Reading properties from 'prop.csv'
Read 1 properties for 17143 compounds from 'prop.csv'
5944 compounds from 'prop.csv' are not in the dataset at 'cyp3a4.mmpdb'
Imported 5586 'STANDARD_VALUE' records (5586 new, 0 updated).
Generated 83759 rule statistics (1329408 rule environments, 1 properties)
Number of rule statistics added: 83759 updated: 0 deleted: 0
Loaded all properties and re-computed all rule statistics.

The DB is now ready to use. Let's play with it.
First, identify possible transforms.

iwatobipen$ mmpdb transform --smiles 'c1ccc(O)cc1' cyp3a4.mmpdb --min-pair 10 -o transfom_res.txt
iwatobipen$ head -n3 transfom_res.txt 
1	CC(=O)NCCO	[*:1]c1ccccc1	[*:1]CCNC(C)=O	0	59SlQURkWt98BOD1VlKTGRkiqFDbG6JVkeTJ3ex3bOA	1049493	14	3632	5313.6	-0.71409	-0.033683	-6279.7	498.81	2190.5	7363.4	12530	-2.5576	0.023849
2	CC(C)CO	[*:1]c1ccccc1	[*:1]CC(C)C	0	59SlQURkWt98BOD1VlKTGRkiqFDbG6JVkeTJ3ex3bOA	1026671	20	7390.7	8556.1	-1.1253	-0.082107	-6503.9	-0	8666.3	13903	23534	-3.863	0.0010478
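The transform output is tab-separated, so it is easy to post-process; here is a minimal stdlib sketch on a synthetic stand-in (the column meanings assumed here are only a subset of the real output):

```python
import csv
import io

# a tiny synthetic stand-in for the tab-separated transform output;
# the assumed columns (id, from fragment, to fragment, pair count, avg delta)
# are a simplification of the real file
sample = ("1\t[*:1]c1ccccc1\t[*:1]CCNC(C)=O\t14\t3632\n"
          "2\t[*:1]c1ccccc1\t[*:1]CC(C)C\t20\t7390.7\n")
rows = list(csv.reader(io.StringIO(sample), delimiter="\t"))
# rank transforms by the number of supporting pairs
rows.sort(key=lambda r: int(r[3]), reverse=True)
best = rows[0]
print(best[2], best[3])  # the transform with the most pairs
```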

The output file contains each transformation together with its statistics.
The DB can also be used to make predictions.
The following command generates two files with the prefix CYP3A.

iwatobipen$ mmpdb predict --reference 'c1ccc(O)cc1' --smiles 'c1ccccc1' cyp3a4.mmpdb  -p STANDARD_VALUE --save-details --prefix CYP3A
iwatobipen$ head -n 3 CYP3A_pairs.txt
rule_environment_id	from_smiles	to_smiles	radius	fingerprint	lhs_public_id	rhs_public_id	lhs_smiles	rhs_smiles	lhs_value	rhs_value	delta
868610	[*:1]O	[*:1][H]	0	59SlQURkWt98BOD1VlKTGRkiqFDbG6JVkeTJ3ex3bOA	1016823	839661	C[C@]12CC[C@@H]3[C@H](CC[C@H]4C[C@@H](O)CC[C@@]43C)[C@@H]1CC[C@H]2C(=O)CO	CC(=O)[C@@H]1CC[C@H]2[C@H]3CC[C@H]4C[C@@H](O)CC[C@]4(C)[C@@H]3CC[C@@]21C	1000	15849	14849
868610	[*:1]O	[*:1][H]	0	59SlQURkWt98BOD1VlKTGRkiqFDbG6JVkeTJ3ex3bOA	3666	47209	O=c1c(O)c(-c2ccc(O)c(O)c2)oc2cc(O)cc(O)c12	O=c1cc(-c2ccc(O)c(O)c2)oc2cc(O)cc(O)c12	15849	5011.9	-10837
iwatobipen$ head -n 3 CYP3A_rules.txt 
rule_environment_statistics_id	rule_id	rule_environment_id	radius	fingerprint	from_smiles	to_smiles	count	avg	std	kurtosis	skewness	min	q1	median	q3	max	paired_t	p_value
28699	143276	868610	0	59SlQURkWt98BOD1VlKTGRkiqFDbG6JVkeTJ3ex3bOA	[*:1]O	[*:1][H]	16	-587.88	14102	-0.47579	-0.065761	-28460	-8991.5	-3247.8	10238	23962	0.16674	0.8698
54091	143276	1140189	1	tLP3hvftAkp3EUY+MHSruGd0iZ/pu5nwnEwNA+NiAh8	[*:1]O	[*:1][H]	15	-1617	13962	-0.25757	-0.18897	-28460	-9534.4	-4646	7271.1	23962	0.44855	0.66062
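Since CYP3A_pairs.txt has a header row, csv.DictReader can load it directly; a sketch on a shortened stand-in that keeps a few of the real column names:

```python
import csv
import io

# shortened stand-in for CYP3A_pairs.txt; only a few of the real columns are kept
sample = ("lhs_public_id\trhs_public_id\tlhs_value\trhs_value\tdelta\n"
          "1016823\t839661\t1000\t15849\t14849\n"
          "3666\t47209\t15849\t5011.9\t-10837\n")
pairs = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))
# average property change across the listed pairs
deltas = [float(p["delta"]) for p in pairs]
mean_delta = sum(deltas) / len(deltas)
print(mean_delta)
```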

It is worth noting that the package can handle not only structural information but also properties.
I learned a lot from the source code.
The RDKit org is a cool community!
I pushed my code to my repo.

The original repo URL is
Do not miss it!


3D conformer fingerprint calculation using RDKit

Recently, an attractive article was published in an ACS journal.
The article describes how to calculate a 3D-structure-based fingerprint and compares it with some fingerprints that are well known in this area.
The new method, called "E3FP", is an algorithm that calculates a 3D conformer fingerprint in the spirit of the Extended Connectivity Fingerprint (ECFP). E3FP encodes information not only about atoms that are connected but also about atoms that are not connected.

The authors show several examples: molecules that are very similar in 2D but not in 3D, and vice versa.
They also compare E3FP similarity with the ROCS score ( TanimotoCombo ) and report good performance.
I was interested in the fingerprint. Fortunately, the authors published the code on Anaconda Cloud!!!!!!!
I installed it and used it ASAP. ;-D
I am a Mac user, so installation is very, very easy! Just type the commands below.
I found some tips for using the package.
First, molecules need a _Name property to perform the calculation.
Second, mol_from_sdf can read a molecule from an SDF, but it cannot read an SDF that contains multiple molecules. So I recommend using a list of molecules instead of an SDF.

conda install -c sdaxen sdaxen_python_utilities
conda install -c keiserlab e3fp

I used CDK2.sdf for the test.
E3FP calculates an unfolded fingerprint, but it can be converted to a folded fingerprint and to an RDKit fingerprint using the fold and to_rdkit functions.

%matplotlib inline
import pandas as pd
import numpy as np
from rdkit import Chem
from e3fp.fingerprint.generate import fp, fprints_dict_from_mol
from e3fp.conformer.generate import generate_conformers
from rdkit.Chem.Draw import IPythonConsole
from rdkit.Chem import Draw
from rdkit.Chem import DataStructs
from rdkit.Chem import AllChem
# this sdf has 3D conformer, so I do not need to generate 3D conf.
mols = [ mol for mol in Chem.SDMolSupplier( "cdk2.sdf", removeHs=False ) ]
fpdicts = [ fprints_dict_from_mol( mol ) for mol in mols ]
# get the E3FP fingerprint
# if a molecule has multiple conformers, the function generates multiple fingerprints
fps = [ fpdict[5][0] for fpdict in fpdicts ]
# convert to rdkit fp from e3fp fingerprint
binfp = [ fp.fold().to_rdkit() for fp in fps ]
# get the Morgan fingerprint
morganfp = [ AllChem.GetMorganFingerprintAsBitVect(mol,2) for mol in mols ]

# calculate pair wise TC
df = {"MOLI":[], "MOLJ":[], "E3FPTC":[], "MORGANTC":[],"pairidx":[]}
for i in range( len(binfp) ):
    for j in range( i ):
        e3fpTC = DataStructs.TanimotoSimilarity( binfp[i], binfp[j] )
        morganTC = DataStructs.TanimotoSimilarity( morganfp[i], morganfp[j] )
        moli = mols[i].GetProp("_Name")
        molj = mols[j].GetProp("_Name")
        df["MOLI"].append( moli )
        df["MOLJ"].append( molj )
        df["E3FPTC"].append( e3fpTC )
        df["MORGANTC"].append( morganTC )
        df["pairidx"].append( str(i)+"_vs_"+str(j) )
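For reference, the Tanimoto coefficient computed by TanimotoSimilarity is simple to state; a minimal pure-Python sketch on toy bit sets (not the actual fingerprints):

```python
def tanimoto(bits_a, bits_b):
    """Tanimoto coefficient between two sets of 'on' bit indices."""
    a, b = set(bits_a), set(bits_b)
    common = len(a & b)
    # intersection over union of the on-bits
    return common / (len(a) + len(b) - common)

# two toy fingerprints sharing 2 of 4 distinct bits -> Tc = 0.5
print(tanimoto({1, 2, 3}, {2, 3, 4}))
```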

The method is fast and easy to use. The bottleneck is how to generate suitable conformer(s).
Readers who are interested in the package, please check the authors' article.

I pushed my sample code to my repo.

Quantum chemistry calculation with Python.

This weekend, I played with the Molecular Design Toolkit.
It is a very nice open-source toolkit for Pythonistas, I think. At first I tried to install the toolkit directly on OS X, but I had some trouble running the code, so I set up a Linux virtual machine instead. It is not so difficult to install the Molecular Design Toolkit in a Linux environment.
First, mdt can be installed with pip.

pip install moldesign

Next, I installed some packages with conda.
I recommend installing pyscf ( a quantum chemistry package for Python ) with conda, because I had some trouble building pyscf from source.

conda install -c moldesign nbmolviz 
conda install -c conda-forge widgetsnbextension
conda install -c pyqc pyscf 
conda install -c omnia openmm
conda install -c omnia pdbfixer 

I also installed Open Babel via apt-get.

apt-get install openbabel python-openbabel

OK, now we're ready!
Let's start coding.
Today I tried a simple example:
read a SMILES string, generate a 3D conformer, calculate the orbitals, and visualize them.
The following code runs in a Jupyter notebook. 😉

import moldesign as mdt
import moldesign.units as u
import pybel
# read a molecule from SMILES, generate a 3D conformer, and save it as SDF
pbmol = pybel.readstring( "smi", "C1=NC2=C(N1)C(=NC=N2)N" )
pbmol.make3D()
pbmol.write( "sdf", "adenine.sdf" )
# read the SDF with moldesign and draw it
mol = "adenine.sdf" )
mol.draw()

The draw function shows both the 2D and 3D structures.

Next, calculate the energy and draw the molecular orbitals.

mol.set_energy_model( mdt.models.RHF, basis='sto-3g')
prop = mol.calculate()
print( prop.keys() )
print( "Energy: ", prop['potential_energy'])
['positions', 'mulliken', 'wfn', 'potential_energy', 'dipole_moment']
('Energy: ', <Quantity(-12479.0741253, 'eV')>)
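As a quick sanity check, the reported energy in eV can be converted to Hartree (1 Hartree ≈ 27.2114 eV):

```python
EV_PER_HARTREE = 27.211386   # approximate conversion factor
energy_ev = -12479.0741253   # RHF/STO-3G energy reported above
energy_ha = energy_ev / EV_PER_HARTREE
print(round(energy_ha, 1))   # about -458.6 Hartree
```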

It works fine. draw_orbitals() can draw orbitals such as the HOMO, the LUMO, and orbitals at any other energy level.

Finally, minimize the geometry.
Then draw the orbitals again.

mintraj = mol.minimize()

Starting geometry optimization: built-in gradient descent
Starting geometry optimization: SciPy/bfgs with analytical gradients
Step 2/20, ΔE=-1.858e-01 eV, RMS ∇E=4.161e-01, max ∇E=1.305e+00 eV / ang
Step 4/20, ΔE=-2.445e-01 eV, RMS ∇E=1.818e-01, max ∇E=6.069e-01 eV / ang
Step 6/20, ΔE=-2.589e-01 eV, RMS ∇E=1.359e-01, max ∇E=4.905e-01 eV / ang
Step 8/20, ΔE=-2.620e-01 eV, RMS ∇E=1.250e-01, max ∇E=5.032e-01 eV / ang
Step 10/20, ΔE=-2.660e-01 eV, RMS ∇E=1.264e-01, max ∇E=3.384e-01 eV / ang
Step 12/20, ΔE=-2.751e-01 eV, RMS ∇E=1.125e-01, max ∇E=2.966e-01 eV / ang
Step 14/20, ΔE=-2.915e-01 eV, RMS ∇E=2.315e-01, max ∇E=5.801e-01 eV / ang
Step 16/20, ΔE=-2.942e-01 eV, RMS ∇E=2.492e-01, max ∇E=6.325e-01 eV / ang
Step 18/20, ΔE=-2.978e-01 eV, RMS ∇E=2.712e-01, max ∇E=7.771e-01 eV / ang
Step 20/20, ΔE=-3.016e-01 eV, RMS ∇E=2.639e-01, max ∇E=7.127e-01 eV / ang
Warning: Maximum number of iterations has been exceeded.
         Current function value: -12479.375700
         Iterations: 19
         Function evaluations: 26
         Gradient evaluations: 26
Reduced energy from -12479.0741253 eV to -12479.3757001 eV

The mintraj object holds the energy state at each step, and the trajectory can be shown as a movie.

Lots of other functions are also implemented in the Molecular Design Toolkit.
I will play with the package more and more. 😉
Today's code was pushed to my repository.



A note on attending Mishima.syk #10

I don't usually work on sequencing or NGS, so MinION was completely new to me. At that size... the progress of technology and science is amazing.
Organic synthesis also has flow chemistry and lab-on-a-chip; integration and speed-up seem to be key themes in research.

Platform-as-a-Service for Deep Learning.

Yesterday, I enjoyed Mishima.syk #10. I uploaded my presentation and code to the mishimasyk repo.
I briefly introduced a PaaS for DL named 'Floyd'. I think the service is interesting because it lets me run DL on the cloud with a GPU!

So, I will describe a very simple example to get started with "FLOYD". 😉
First, make an account on the site.
Next, install the command line tool. Just type pip install -U floyd-cli!

# Install floyd-cli
$ pip install -U floyd-cli

Third step: log in to Floyd.

# from terminal
$ floyd login

Then a web browser will launch, and the page provides an authentication token. Copy and paste it.
Ready to start!
Let's play with Floyd.
The first example is iris dataset classification using sklearn.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
dataset = load_iris()
X =
y =

trainx, testx, trainy, testy = train_test_split( X, y, test_size=0.2,random_state = 123 )

svc = SVC( kernel='rbf' ) trainx, trainy )

rfc = RandomForestClassifier() trainx, trainy )

predsvc = svc.predict( testx )
predrf = rfc.predict( testx )

print( classification_report(testy, predsvc ))
print( classification_report(testy, predrf ))

Use the floyd run command to start the code after initializing the project.

$ mkdir test_pj
$ cd test_pj
$ floyd init
$ floyd run 'python'
Creating project run. Total upload size: 168.9KiB
Syncing code ...
[================================] 174656/174656 - 00:00:02
RUN ID                  NAME                     VERSION
----------------------  ---------------------  ---------
xxxxxxxx  iwatobipen/test_pj:10         10

To view logs enter:
    floyd logs xxxxxxxx

I could check the status via web browser.

Next, run a DNN classification model.
It is a very, very simple example. Not so deeeeeeeeeeeep.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.cross_validation import train_test_split
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.utils import np_utils

dataset = load_iris()
X =

xdim = 4
y =
y = np_utils.to_categorical( y, 3 )
trainx, testx, trainy, testy = train_test_split( X, y, test_size=0.2,random_state = 123 )

model = Sequential()
model.add( Dense( 16, input_dim = xdim  ) )
model.add( Activation( 'relu' ))
model.add( Dense( 3 ))
model.add( Activation( 'softmax' ))
model.compile( loss = 'categorical_crossentropy',
               optimizer = 'rmsprop',
               metrics = ['accuracy'])

hist = trainx, trainy, epochs = 50, batch_size = 1 )
classes = model.predict( testx, batch_size = 1 )

print( [ np.argmax(i) for i in classes ] )
print( [ np.argmax(i) for i in testy ] )
loss, acc = model.evaluate( testx, testy )

print( "loss, acc ={0},{1}".format( loss, acc ))

Run the code in the same manner.

iwatobipen$ floyd run 'python'
Creating project run. Total upload size: 168.9KiB
Syncing code ...
[================================] 174653/174653 - 00:00:02
RUN ID                  NAME                     VERSION
----------------------  ---------------------  ---------
xxxxxxx  iwatobipen/test_pj:11         11

To view logs enter:
    floyd logs xxxxxxx

Check the log on the website.

2017-07-09 01:51:37,703 INFO - Preparing to run TaskInstance <TaskInstance: iwatobipen/test_pj:11 (id: Uus7cp996732cBWdgt3nz3) (checksum: 144078ab50a63ea6276efee221669d13) (last update: 2017-07-09 01:51:37.694913) [queued]>
2017-07-09 01:51:37,723 INFO - Starting attempt 1 at 2017-07-09 01:51:37.708707
2017-07-09 01:51:38,378 INFO - adding pip install -r floyd_requirements
2017-07-09 01:51:38,394 INFO - Executing command in container: stdbuf -o0 sh
2017-07-09 01:51:38,394 INFO - Pulling Docker image: floydhub/tensorflow:1.1.0-py3_aws.4
2017-07-09 01:51:39,652 INFO - Starting container...
2017-07-09 01:51:39,849 INFO -

2017-07-09 01:51:39,849 INFO - Run Output:
2017-07-09 01:51:40,317 INFO - Requirement already satisfied: Pillow in /usr/local/lib/python3.5/site-packages (from -r floyd_requirements.txt (line 1))
2017-07-09 01:51:40,320 INFO - Requirement already satisfied: olefile in /usr/local/lib/python3.5/site-packages (from Pillow->-r floyd_requirements.txt (line 1))
2017-07-09 01:51:43,354 INFO - Epoch 1/50
2017-07-09 01:51:43,460 INFO - 1/120 [..............................] - ETA: 8s - loss: 0.8263 - acc: 0.0000e+00
 58/120 [=============>................] - ETA: 0s - loss: 1.5267 - acc: 0.6552
115/120 [===========================>..] - ETA: 0s - loss: 1.2341 - acc: 0.6522
120/120 [==============================] - 0s - loss: 1.2133 - acc: 0.6583
2017-07-09 01:51:43,461 INFO - Epoch 2/50
 57/120 [=============>................] - ETA: 0s - loss: 0.1135 - acc: 0.9649
115/120 [===========================>..] - ETA: 0s - loss: 0.1242 - acc: 0.9739
120/120 [==============================] - 0s - loss: 0.1270 - acc: 0.9750
2017-07-09 01:51:48,660 INFO - Epoch 50/50
2017-07-09 01:51:48,799 INFO - 1/120 [..............................] - ETA: 0s - loss: 0.0256 - acc: 1.0000
 57/120 [=============>................] - ETA: 0s - loss: 0.0911 - acc: 0.9825
114/120 [===========================>..] - ETA: 0s - loss: 0.1146 - acc: 0.9737
120/120 [==============================] - 0s - loss: 0.1161 - acc: 0.9750
2017-07-09 01:51:48,799 INFO - [1, 2, 2, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 2, 2, 2, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 2, 2, 0]
2017-07-09 01:51:48,800 INFO - [1, 2, 2, 1, 0, 2, 1, 0, 0, 1, 2, 0, 1, 2, 2, 2, 0, 0, 1, 0, 0, 2, 0, 2, 0, 0, 0, 2, 2, 0]
2017-07-09 01:51:48,800 INFO - 30/30 [==============================] - 0s
2017-07-09 01:51:48,800 INFO - loss, acc =0.23778462409973145,0.8666666746139526

The following software packages (in addition to many other common libraries) are available in all the environments:
h5py, iPython, Jupyter, matplotlib, numpy, OpenCV, Pandas, Pillow, scikit-learn, scipy, sklearn

Also, users can install additional packages from PyPI. ( Not Anaconda … 😦 ) To do that, put a file named 'floyd_requirements.txt' in the project folder.
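The log output above shows Pillow being pulled in from such a file; floyd_requirements.txt is just one PyPI package name per line, for example:

```
Pillow
```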

In summary, Floyd is a very interesting service: it is easy to set up a DL environment and use a GPU on the cloud.
I hope FLOYD will support Anaconda, because I want to use chemoinformatics packages like RDKit, Open Babel, etc.