Molecular property regression with Attentive FP #RDKit #Chemoinformatics #DGL #DeepGraphLibrary

Recently, molecular-graph-based deep learning has been a hot area in chemoinformatics.
Some months ago, Zhaoping et al. published a new graph-based QSAR model named 'Attentive FP' in JMC.

As its name suggests, Attentive FP uses attention mechanism for its architecture.

The authors disclosed their code, and fortunately a recent version of DGL also implements Attentive FP!
The DGL repository provides an example of molecular property regression with Attentive FP. However, it was difficult for me to see how to apply the model to my own dataset.
So I updated DGL and tried Attentive FP myself. In the following code I used the solubility data that ships with RDKit for practice.

First, import several packages for deep learning. DGL has many functions for chemoinformatics tasks, so users don't need to implement the molecule-to-graph conversion themselves.

%matplotlib inline 
import matplotlib.pyplot as plt
import os
from rdkit import Chem
from rdkit import RDPaths

import dgl
import numpy as np
import random
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
from dgl import model_zoo

from dgl.data.chem.utils import mol_to_complete_graph, mol_to_bigraph

from dgl.data.chem.utils import atom_type_one_hot
from dgl.data.chem.utils import atom_degree_one_hot
from dgl.data.chem.utils import atom_formal_charge
from dgl.data.chem.utils import atom_num_radical_electrons
from dgl.data.chem.utils import atom_hybridization_one_hot
from dgl.data.chem.utils import atom_total_num_H_one_hot
from dgl.data.chem.utils import one_hot_encoding
from dgl.data.chem import CanonicalAtomFeaturizer
from dgl.data.chem import CanonicalBondFeaturizer
from dgl.data.chem import ConcatFeaturizer
from dgl.data.chem import BaseAtomFeaturizer
from dgl.data.chem import BaseBondFeaturizer

from dgl.data.utils import split_dataset

from functools import partial
from sklearn.metrics import roc_auc_score

Then I defined some helper functions for the task. Most of the code is borrowed from the original dgl/examples. Thanks for sharing the nice code!

def chirality(atom):
    try:
        return one_hot_encoding(atom.GetProp('_CIPCode'), ['R', 'S']) + \
               [atom.HasProp('_ChiralityPossible')]
    except KeyError:
        # the atom has no assigned CIP code
        return [False, False] + [atom.HasProp('_ChiralityPossible')]
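DGL's one_hot_encoding just maps a value onto a boolean vector over an allowable set, optionally with an extra slot for unknown values. A minimal pure-Python sketch of the idea (my own re-implementation, not DGL's code):

```python
def one_hot_encoding_sketch(x, allowable_set, encode_unknown=False):
    # one boolean per allowed value, optionally plus an "unknown" slot
    encoding = [x == s for s in allowable_set]
    if encode_unknown:
        encoding.append(x not in allowable_set)
    return encoding

print(one_hot_encoding_sketch('R', ['R', 'S']))  # [True, False]
print(one_hot_encoding_sketch('Se', ['C', 'N', 'O'], encode_unknown=True))
# [False, False, False, True]
```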
    
def collate_molgraphs(data):
    """Batching a list of datapoints for dataloader.
    Parameters
    ----------
    data : list of 3-tuples or 4-tuples.
        Each tuple is for a single datapoint, consisting of
        a SMILES, a DGLGraph, all-task labels and optionally
        a binary mask indicating the existence of labels.
    Returns
    -------
    smiles : list
        List of smiles
    bg : BatchedDGLGraph
        Batched DGLGraphs
    labels : Tensor of dtype float32 and shape (B, T)
        Batched datapoint labels. B is len(data) and
        T is the number of total tasks.
    masks : Tensor of dtype float32 and shape (B, T)
        Batched datapoint binary mask, indicating the
        existence of labels. If binary masks are not
        provided, return a tensor with ones.
    """
    assert len(data[0]) in [3, 4], \
        'Expect the tuple to be of length 3 or 4, got {:d}'.format(len(data[0]))
    if len(data[0]) == 3:
        smiles, graphs, labels = map(list, zip(*data))
        masks = None
    else:
        smiles, graphs, labels, masks = map(list, zip(*data))

    bg = dgl.batch(graphs)
    bg.set_n_initializer(dgl.init.zero_initializer)
    bg.set_e_initializer(dgl.init.zero_initializer)
    labels = torch.stack(labels, dim=0)
    
    if masks is None:
        masks = torch.ones(labels.shape)
    else:
        masks = torch.stack(masks, dim=0)
    return smiles, bg, labels, masks
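The map(list, zip(*data)) idiom inside collate_molgraphs transposes a list of datapoint tuples into per-field lists; a toy illustration (strings stand in for the real DGLGraphs):

```python
# each tuple is (smiles, graph, label), as produced by the dataset
data = [('CCO', 'graph0', 0.5), ('c1ccccc1', 'graph1', -1.2)]

smiles, graphs, labels = map(list, zip(*data))
print(smiles)  # ['CCO', 'c1ccccc1']
print(labels)  # [0.5, -1.2]
```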


def run_a_train_epoch(n_epochs, epoch, model, data_loader, loss_criterion, optimizer):
    model.train()
    losses = []

    for batch_id, batch_data in enumerate(data_loader):
        smiles, bg, labels, masks = batch_data
        if torch.cuda.is_available():
            bg.to(torch.device('cuda:0'))
            labels = labels.to('cuda:0')
            masks = masks.to('cuda:0')

        prediction = model(bg, bg.ndata['hv'], bg.edata['he'])
        # zero out the loss of datapoints whose label is masked
        loss = (loss_criterion(prediction, labels) * (masks != 0).float()).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        losses.append(loss.data.item())

    total_score = np.mean(losses)
    print('epoch {:d}/{:d}, training {:.4f}'.format(epoch + 1, n_epochs, total_score))
    return total_score
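The masked-loss line can be illustrated with toy NumPy arrays (my own toy numbers, standing in for the model tensors). Note that masked entries contribute zero to the sum but are still counted in the mean:

```python
import numpy as np

prediction = np.array([[1.0], [2.0], [3.0]])
labels = np.array([[1.5], [2.0], [10.0]])
masks = np.array([[1.0], [1.0], [0.0]])  # third label is missing

# MSELoss(reduction='none') returns the element-wise squared error
elementwise = (prediction - labels) ** 2
loss = (elementwise * (masks != 0)).mean()
print(loss)  # 0.25 / 3, about 0.0833
```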

After that, I defined the atom and bond featurizer functions. Their settings are the same as in the original repository, but it is easy to modify the featurizers.

atom_featurizer = BaseAtomFeaturizer(
    {'hv': ConcatFeaturizer([
        partial(atom_type_one_hot,
                allowable_set=['B', 'C', 'N', 'O', 'F', 'Si', 'P', 'S', 'Cl',
                               'As', 'Se', 'Br', 'Te', 'I', 'At'],
                encode_unknown=True),
        partial(atom_degree_one_hot, allowable_set=list(range(6))),
        atom_formal_charge,
        atom_num_radical_electrons,
        partial(atom_hybridization_one_hot, encode_unknown=True),
        lambda atom: [0],  # a placeholder for aromatic information
        atom_total_num_H_one_hot,
        chirality,
    ])})

bond_featurizer = BaseBondFeaturizer({
    'he': lambda bond: [0 for _ in range(10)]
})
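As a sanity check, the concatenated atom features should add up to the 39 dimensions passed as node_feat_size below. The per-featurizer lengths here are my own tally, based on DGL's default allowable sets at the time:

```python
# length of each concatenated atom feature (my own tally)
feature_sizes = {
    'atom_type (15 symbols + unknown)': 16,
    'degree (0-5)': 6,
    'formal_charge': 1,
    'num_radical_electrons': 1,
    'hybridization (5 types + unknown)': 6,
    'aromatic placeholder': 1,
    'total_num_H (0-4)': 5,
    'chirality (R/S + possible flag)': 3,
}
total = sum(feature_sizes.values())
print(total)  # 39
```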

If you would like to use the same featurizers as DeepChem, you can use CanonicalAtomFeaturizer / CanonicalBondFeaturizer.
https://docs.dgl.ai/en/latest/generated/dgl.data.chem.CanonicalAtomFeaturizer.html
https://docs.dgl.ai/en/latest/generated/dgl.data.chem.CanonicalBondFeaturizer.html

DGL seems quite friendly for chemoinformaticians, I think.

OK, let's load the dataset. The mol_to_bigraph method with featurizers converts an RDKit mol object to a graph object. Also, the smiles_to_bigraph method can convert SMILES directly to a graph! Cool ;)

train=os.path.join(RDPaths.RDDocsDir, 'Book/data/solubility.train.sdf')
test=os.path.join(RDPaths.RDDocsDir, 'Book/data/solubility.test.sdf')

train_mols = [m for m in Chem.SDMolSupplier(train) if m is not None]
train_smi = [Chem.MolToSmiles(m) for m in train_mols]
train_sol = torch.tensor([float(mol.GetProp('SOL')) for mol in train_mols]).reshape(-1, 1)

test_mols = [m for m in Chem.SDMolSupplier(test) if m is not None]
test_smi = [Chem.MolToSmiles(m) for m in test_mols]
test_sol = torch.tensor([float(mol.GetProp('SOL')) for mol in test_mols]).reshape(-1, 1)


train_graph =[mol_to_bigraph(mol,
                           atom_featurizer=atom_featurizer, 
                           bond_featurizer=bond_featurizer) for mol in train_mols]

test_graph =[mol_to_bigraph(mol,
                           atom_featurizer=atom_featurizer, 
                           bond_featurizer=bond_featurizer) for mol in test_mols]

The AttentiveFP model is provided in model_zoo. Then define dataloaders for training and test.

model = model_zoo.chem.AttentiveFP(node_feat_size=39,
                                  edge_feat_size=10,
                                  num_layers=2,
                                  num_timesteps=2,
                                  graph_feat_size=200,
                                  output_size=1,
                                  dropout=0.2)
model = model.to('cuda:0' if torch.cuda.is_available() else 'cpu')

train_loader = DataLoader(dataset=list(zip(train_smi, train_graph, train_sol)), batch_size=128, collate_fn=collate_molgraphs)
test_loader = DataLoader(dataset=list(zip(test_smi, test_graph, test_sol)), batch_size=128, collate_fn=collate_molgraphs)

DataLoader is a PyTorch-native class. It generates an iterator over batches of the dataset.
Now we're almost there! Let's go to the learning process.

loss_fn = nn.MSELoss(reduction='none')
optimizer = torch.optim.Adam(model.parameters(), lr=10 ** (-2.5), weight_decay=10 ** (-5.0),)
n_epochs = 100
epochs = []
scores = []
for e in range(n_epochs):
    score = run_a_train_epoch(n_epochs, e, model, train_loader, loss_fn, optimizer)
    epochs.append(e)
    scores.append(score)

>>> output is below.
epoch 1/100, training 8.8096
----snip----
epoch 98/100, training 0.3706
epoch 99/100, training 0.3915
epoch 100/100, training 0.3003

plt.plot(epochs, scores)

It seems that the learning process went well ;).

OK, let's validate the model!

model.eval()
all_pred = []
with torch.no_grad():
    for test_data in test_loader:
        smi_lst, bg, labels, masks = test_data
        if torch.cuda.is_available():
            bg.to(torch.device('cuda:0'))
            labels = labels.to('cuda:0')
            masks = masks.to('cuda:0')
        pred = model(bg, bg.ndata['hv'], bg.edata['he'])
        all_pred.append(pred.data.cpu().numpy())
res = np.vstack(all_pred)
plt.clf()
plt.scatter(res, test_sol)
plt.xlabel('pred')
plt.ylabel('exp')
from sklearn.metrics import r2_score
print(r2_score(test_sol, res))
> 0.9098691301661277
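For reference, r2_score computes R² = 1 − SS_res / SS_tot; a minimal NumPy version of the same formula:

```python
import numpy as np

def r2(y_true, y_pred):
    # coefficient of determination: 1 - residual SS / total SS
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
print(r2(y, y))                      # 1.0: perfect prediction
print(r2(y, np.full(4, y.mean())))   # 0.0: no better than the mean
```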

Let’s compare to RandomForest.

from sklearn.ensemble import RandomForestRegressor
from rdkit import Chem
from rdkit.Chem import AllChem
train_fp = [AllChem.GetMorganFingerprintAsBitVect(mol,2) for mol in train_mols]
test_fp = [AllChem.GetMorganFingerprintAsBitVect(mol,2) for mol in test_mols]
# make RF regressor and train it.
rfr = RandomForestRegressor()
# sklearn expects a 1-D target array
rfr.fit(train_fp, train_sol.numpy().ravel())

Check the performance.

rfr_pred = rfr.predict(test_fp)
r2_score(test_sol, rfr_pred)
plt.clf()
plt.scatter(rfr_pred, test_sol)

The AttentiveFP model showed high performance for solubility prediction in this case (though my RandomForest code is not optimized). The DGL example code is very useful for DGL beginners, but it was difficult to apply to my own dataset, so I needed to rewrite the code for my data.

Anyway, I would like to buy a beer for the DGL developers. DGL is a very nice package for chemoinformatics and 'RDKitds'. ('RDKitds' is a new nickname for RDKit users, proposed at the RDKit UGM 2019 ;))
Today's whole code is below.

Thanks,

Python package for Ensemble learning #Chemoinformatics #Scikit learn

Ensemble learning is a technique in machine learning. I wrote a post about blending before; the URL is below.
https://iwatobipen.wordpress.com/2018/11/11/ensemble-learning-with-scikit-learn-and-xgboost-machine-learning/
I implemented the code by myself at that time.

Ensemble learning sometimes outperforms a single model, so the method is worth trying. Fortunately, we can now use ensemble learning very easily with a python package named 'ML-Ens'. Installation is very easy: just use pip, the common way for pythonistas I think ;)

After installing the package, users can build and train an ensemble learning model in a few lines. I would like to introduce two examples: one is the stacking method and the other is the blending method. OK, let's go to the code.

At first, load the dataset and make input features. I used Morgan fingerprints as input data.

from rdkit import Chem
from rdkit.Chem import DataStructs
from rdkit.Chem import AllChem
from rdkit import RDPaths
from rdkit.Chem.Draw import IPythonConsole
from rdkit.Chem import Draw
from rdkit.Chem import PandasTools
import numpy as np
import pandas as pd
import os
from IPython.display import HTML
traindf = PandasTools.LoadSDF(os.path.join(RDPaths.RDDocsDir,'Book/data/solubility.train.sdf'))
testdf = PandasTools.LoadSDF(os.path.join(RDPaths.RDDocsDir, 'Book/data/solubility.test.sdf'))
# Check data
HTML(traindf.head(2).to_html())

cls2lab = {'(A) low':0, '(B) medium':1, '(C) high':2}

def fp2np(fp):
    arr = np.zeros((0,))
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr
trainfp = [AllChem.GetMorganFingerprintAsBitVect(m, 2) for m in traindf.ROMol]
testfp =  [AllChem.GetMorganFingerprintAsBitVect(m, 2) for m in testdf.ROMol]
trainX = np.array([fp2np(fp) for fp in trainfp])
testX = np.array([fp2np(fp) for fp in testfp])
trainY = np.array([cls2lab[i] for i in traindf.SOL_classification.to_list()])
testY =  np.array([cls2lab[i] for i in testdf.SOL_classification.to_list()])

Then import several packages for ensemble learning. SuperLearner is the class for stacking, and BlendEnsemble is the class for blending.

Making an ensemble model is easy. Just use the add method for layer addition, and finally call the add_meta method to add the final prediction layer.

from mlens.ensemble import SuperLearner
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.metrics import r2_score, accuracy_score
from sklearn.svm import SVR, SVC

# For stacking
ensemble = SuperLearner(scorer=accuracy_score, random_state=794, verbose=2)
ensemble.add([RandomForestClassifier(n_estimators=100, random_state=794), SVC(gamma='auto', C=1000)])
ensemble.add_meta(LogisticRegression(solver='lbfgs', multi_class='auto'))

ensemble.fit(trainX, trainY)
pred = ensemble.predict(testX)
accuracy_score(testY, pred)

# Blending
from mlens.ensemble import BlendEnsemble
ensemble2 = BlendEnsemble(scorer=accuracy_score, test_size=0.2, verbose=2)
ensemble2.add([RandomForestClassifier(n_estimators=794, random_state=794),
                 SVC(gamma='auto')])
ensemble2.add_meta(LogisticRegression(solver='lbfgs', multi_class='auto'))
ensemble2.fit(trainX, trainY)
pred_b = ensemble2.predict(testX)
accuracy_score(pred_b, testY)

More models can also be added with the add method. I uploaded the whole code to my gist. After calling fit, it is easy to access the results via the data attribute.
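The idea behind stacking can also be sketched without mlens: the predictions of the base layer become the input features of the meta-model. In this toy example the "models" are plain functions rather than fitted estimators:

```python
import numpy as np

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 2.0, 3.0])

# two toy "base models": one under-, one over-estimates
base1 = lambda X: X[:, 0] * 0.9
base2 = lambda X: X[:, 0] * 1.1

# the layer output: one column of predictions per base model
meta_features = np.column_stack([base1(X), base2(X)])

# a toy "meta model" that simply averages the base predictions
meta = lambda Z: Z.mean(axis=1)
print(meta(meta_features))  # [0. 1. 2. 3.] -- their errors cancel here
```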

code example

Unfortunately, the ensemble models described in this post don't outperform a single random forest model, but mlens is a nice tool for ensemble learning, and there is still room for improving model performance: the kinds of base models, hyperparameters, etc.

The original documentation gives more information. Please follow the link if you are interested.
http://ml-ensemble.com/docs/

Quantum Chemistry data of drug bank #QCportal #Quantum_Chemistry

I'm still learning QCArchive. I previously posted about qcportal with a reaction dataset, and today I tried to retrieve DrugBank data from qcportal. QCPortal provides not only calculated numeric data but also a 3D molecular view using py3Dmol.

OK, let's go to the code. The get_molecules method retrieves many molecules from the qcportal web server.

import qcportal as ptl
client = ptl.FractalClient()
ds = client.get_collection("Dataset", "COMP6 DrugBank")
mols = ds.get_molecules()
mols.shape
> (13379, 1)

What kinds of data are in the dataset? It is easy to check: just call some methods.

ds.list_values().reset_index()['method'].unique()
> array(['ωB97x', 'b3lyp', 'b3lyp-d3m(bj)', 'hf', 'pbe', 'pbe-d3m(bj)',
       'svwn', 'wb97m', 'wb97m-d3(bj)'], dtype=object)
ds.list_values().reset_index()['basis'].unique()
> array(['6-31g*', 'def2-tzvp'], dtype=object)

ds.list_values()

This dataset has data not only from Psi4 but also from Gaussian!

I got data for method='ωB97x'.

val = ds.get_values(method='ωB97x')
val.columns
> Index(['CM5 Charges', 'Hirshfeld Charges', 'Energy', 'Gradient',
       'Hirshfeld Dipole', 'Spin Density'],
      dtype='object')

I got energies from the data and visualized a molecule.

energy = val['Energy']
mols['molecule'][0].show()
energy[0]
> -636107.9519541461

Py3Dmol works very well. I could get the QC energy of a molecule in DrugBank and render the molecule as a 3D object.

It is very cool!

My whole code is uploaded at the following URL.

Have a nice weekend! ;)

https://nbviewer.jupyter.org/github/iwatobipen/playground/blob/master/drug_bank.ipynb

Small molecule MD with openMM #MD #Openforcefield

I updated openforcefield from ver. 0.5 to ver. 0.6. The SMIRNOFF force field was also updated.

I tried to use the new version of OpenFF.
At first, I calculated partial charges with the semi-empirical method 'AM1-BCC'. AmberTools is used for the calculation; it is easy.

from openforcefield.topology import Molecule
from openforcefield.utils.toolkits import RDKitToolkitWrapper, AmberToolsToolkitWrapper
from openforcefield.topology import Topology
from openforcefield.typing.engines.smirnoff import ForceField
biar = Molecule.from_smiles('c1ccccc1-c1c(C)ccnc1')
#Generates conformers; the default number of generated conformers is 10.
biar.generate_conformers()
biar.compute_partial_charges_am1bcc()

The calculation finishes quickly; let's check the result. The nitrogen has the most negative charge, and the neighboring aromatic carbons have positive charges.

pc = biar.partial_charges  # computed above by compute_partial_charges_am1bcc()
for i, atm in enumerate(biar.atoms):
    print(pc[i], atm)
-0.1175 e 
-0.1305 e 
-0.125 e 
-0.1305 e 
-0.1175 e 
-0.036 e 
-0.1543 e 
-0.0243 e 
-0.0648 e 
-0.2513 e 
0.3952 e 
-0.668 e 
0.4062 e 
0.136 e 
0.1335 e 
0.133 e 
0.1335 e 
0.136 e 
0.0527 e 
0.0527 e 
0.0527 e 
0.143 e 
0.0221 e 
0.0251 e 
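A quick sanity check on this output: the molecule is neutral, so the AM1-BCC partial charges should sum to roughly zero (values copied from the printout above; the small residual comes from rounding in the printed digits):

```python
charges = [-0.1175, -0.1305, -0.125, -0.1305, -0.1175, -0.036, -0.1543,
           -0.0243, -0.0648, -0.2513, 0.3952, -0.668, 0.4062, 0.136,
           0.1335, 0.133, 0.1335, 0.136, 0.0527, 0.0527, 0.0527, 0.143,
           0.0221, 0.0251]
total = sum(charges)
print(round(total, 3))  # close to zero
```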

It seems to work fine. OK, let's try an MD calculation.

For convenience, I wrote a simple script and a config file for the calculation.
The following code runs MD on the SMILES given as sys.argv[1].

#small_mol_md.py
import yaml
import sys
import os
import time
import matplotlib.pyplot as plt
from openforcefield.topology import Molecule
from openforcefield.topology import Topology
from openforcefield.typing.engines.smirnoff import ForceField
from openforcefield.utils.toolkits import RDKitToolkitWrapper
from openforcefield.utils.toolkits import AmberToolsToolkitWrapper
from simtk import openmm
from simtk import unit
from rdkit import Chem

def run_md(molecule, confId=0):
    off_topology = molecule.to_topology()
    omm_topology = off_topology.to_openmm()
    system = forcefield.create_openmm_system(off_topology)

    time_step = config["time_step"] * unit.femtoseconds
    temperature = config["temperature"] * unit.kelvin
    friction = 1 / unit.picosecond
    integrator = openmm.LangevinIntegrator(temperature, friction, time_step)
    
    conf = molecule.conformers[confId]
    simulation = openmm.app.Simulation(omm_topology,
                                       system,
                                       integrator)
    simulation.context.setPositions(conf)
    if not os.path.isdir('./log'):
        os.mkdir('./log')
    pdb_reporter = openmm.app.PDBReporter('./log/trj.pdb', config["trj_freq"])
    state_data_reporter = openmm.app.StateDataReporter("./log/data.csv",
                                                       config["data_freq"],
                                                       step=True,
                                                       potentialEnergy=True,
                                                       temperature=True,
                                                       density=True)
    simulation.reporters.append(pdb_reporter)
    simulation.reporters.append(state_data_reporter)
    start = time.process_time()
    simulation.step(config["num_steps"])
    end = time.process_time()
    print(f"Elapsed time {end-start:.2f} sec")
    print("Done")

if __name__=="__main__":
    forcefield = ForceField("openff-1.0.0.offxml")
    config = yaml.load(open("mdconf.yml", "r"), yaml.Loader)
    molecule = Molecule.from_smiles(sys.argv[1])
    molecule.generate_conformers()
    run_md(molecule)

And calculation configuration is below.

#mdconf.yml
time_step: 2
temperature: 300
friction: 1
trj_freq: 1
data_freq: 1
num_steps: 1000
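The script reads this file with a single yaml.load call; here the same keys are parsed from an inline string (assuming PyYAML is installed, and using safe_load, which is enough for a flat config like this):

```python
import yaml

config_text = """
time_step: 2
temperature: 300
friction: 1
trj_freq: 1
data_freq: 1
num_steps: 1000
"""
config = yaml.safe_load(config_text)
print(config["num_steps"])   # 1000
print(config["time_step"])   # 2
```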

Run the calculation.
$ python small_mol_md.py 'c1ccc(C)cc1-c2c(OC)nccc2'

After the calculation, I got a PDB file and a CSV file.
The PDB file has 1000 states, and the CSV file has the calculated data.

(Blue shows energy and red shows temperature.)

It took ~10 sec for this molecule, so it would take a long time for a large-scale calculation.

MD calculations require many parameters. I'm not familiar with such calculations, so I started to learn. I have now installed GROMACS on my PC.

There are lots of things I would like to learn….

Calculate free solvent accessible surface area #RDKit #Chemoinformatics

A recent version of RDKit has a method to calculate FreeSASA.
I had never used the function, so I tried it.

I calculated FreeSASA for two very simple molecules: phenol and hydroxypyridine.

from rdkit import Chem
from rdkit.Chem import rdFreeSASA
from rdkit.Chem.Draw import IPythonConsole
from rdkit.Chem import AllChem
mol1 = Chem.MolFromSmiles('Oc1ccccc1')
mol2 = Chem.MolFromSmiles('Oc1ccncc1')
hmol1 = Chem.AddHs(mol1)
hmol2 = Chem.AddHs(mol2)
AllChem.EmbedMolecule(hmol1)
AllChem.EmbedMolecule(hmol2)

To calculate FreeSASA, preparing atom radii is needed.

radii1 = rdFreeSASA.classifyAtoms(hmol1)
radii2 = rdFreeSASA.classifyAtoms(hmol2)

Now ready, let’s calculate FreeSASA.

rdFreeSASA.CalcSASA(hmol1, radii1)
> 137.43293375181904
rdFreeSASA.CalcSASA(hmol2, radii2)
> 128.34398350646256

At first I expected that the FreeSASA of hydroxypyridine would be larger than phenol's, but the result is the opposite. So I wanted to understand the reason in detail.

After calculating FreeSASA, each atom has a SASA property.

atoms1 = hmol1.GetAtoms()
atoms2 = hmol2.GetAtoms()
for i in range(len(atoms1)):
    print(atoms1[i].GetSymbol(), atoms1[i].GetProp('SASAClassName'), atoms1[i].GetProp("SASA"))
    
sum(float(a.GetProp("SASA")) for a in atoms1)


O Unclassified 10.276248749137361
C Unclassified 5.6117335908330768
C Unclassified 4.8812286399274658
C Unclassified 4.9178986731131236
C Unclassified 4.923259125887407
C Unclassified 4.8241215955112828
C Unclassified 4.8595021375180254
H Unclassified 16.645522512291386
H Unclassified 16.254710140190241
H Unclassified 15.866400115020539
H Unclassified 16.022421230036539
H Unclassified 16.089713316178983
H Unclassified 16.260173926173582
137.43293375181904

for i in range(len(atoms2)):
    print(atoms2[i].GetSymbol(), atoms2[i].GetProp('SASAClassName'), atoms2[i].GetProp("SASA"))
    
sum(float(a.GetProp("SASA")) for a in atoms2)

O Unclassified 10.443721296042458
C Unclassified 5.5711494477882848
C Unclassified 4.7609239637426501
C Unclassified 4.9640112698257193
N Unclassified 11.64593971756287
C Unclassified 4.6638358234073181
C Unclassified 5.0220733873889731
H Unclassified 16.474508728534751
H Unclassified 16.091035411384464
H Unclassified 16.409635462176684
H Unclassified 16.194368688350266
H Unclassified 16.102780310258137
128.34398350646256

The reason is that phenol has one more hydrogen atom, and it occupies more surface area than the aromatic nitrogen does.

I think this parameter could be used as an atom property for a GCN.
Today's sample code is here.

Make RGroup decompose table with old ver. RDKit #RDKit #Chemoinformatics

Objective
A recent version of RDKit has the rdRGroupDecomposition module for R-group decomposition. You know, it is a very useful module for SAR analysis. However, this function is not implemented in old versions of rdkit (i.e. rdkit for 32-bit Windows). I would like to implement a similar module for 32-bit Windows systems.

My approach with code
So, I tried to write an R-group decomposition function. The following code is slow; it needs more improvement for large molecular datasets.
At first, import the required packages and the dataset.

import pandas as pd
from rdkit.Chem import Draw
from rdkit.Chem import PandasTools
from rdkit.Chem.Draw import IPythonConsole
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit.Chem import rdFMCS
from rdkit.Chem import RWMol
from collections import defaultdict
from rdkit.Chem.rdDepictor import SetPreferCoordGen
SetPreferCoordGen(True)
from rdkit import RDPaths
import os
mols = [m for m in Chem.SDMolSupplier(os.path.join(RDPaths.RDDocsDir,'Book/data/cdk2.sdf'))]
for m in mols:
    AllChem.Compute2DCoords(m)

Then find the MCS of the molecules. I would like to extract a core structure, so I set some options for FindMCS.

mcs = rdFMCS.FindMCS(mols[:4], completeRingsOnly=True, ringMatchesRingOnly=True)
core = Chem.MolFromSmarts(mcs.smartsString)

Then define the RGroupDecomposer class for R-group decomposition. I remove atoms that are not in a ring from the core structure, because medicinal chemists often use the ring system as a scaffold. To do this, the RWMol class is used; it is useful for molecular editing. The following code automatically skips molecules which do not have the core structure.

class RGroupDecomposer():

    def __init__(self, mols, core):
        self.mols = mols
        self.core = core
        
    def get_scaffold(self):
        emol = RWMol(self.core)
        # RemoveAtom renumbers atoms, so collect the indices first
        # and remove them in descending order
        remove_idx = [a.GetIdx() for a in emol.GetAtoms() if not a.IsInRing()]
        for idx in sorted(remove_idx, reverse=True):
            emol.RemoveAtom(idx)
        return emol.GetMol()
    
    def rg_decompose(self):
        self.coremol = self.get_scaffold()
        self.match_mols = [mol for mol in self.mols if mol.HasSubstructMatch(self.coremol)]
        self.hmols = [Chem.AddHs(mol) for mol in self.match_mols]
        [AllChem.Compute2DCoords(mol) for mol in self.hmols]
        self.side_chains = [Chem.ReplaceCore(mol, self.coremol) for mol in self.hmols]
        self.core_info = Chem.ReplaceSidechains(self.hmols[0], self.coremol)
        
    def get_df(self):
        data = defaultdict(list)
        for i, side_chains in enumerate(self.side_chains):
            data['mol'].append(Chem.MolToSmiles(self.match_mols[i]))
            side_chains = Chem.MolToSmiles(side_chains).split('.')
            for idx, side_chain in enumerate(side_chains):
                side_chain = Chem.MolToSmiles(Chem.RemoveHs(Chem.MolFromSmiles(side_chain)))
                data['R_{}'.format(idx+1)].append(side_chain)
            data['core'].append(Chem.MolToSmiles(self.core_info))
            
        df = pd.DataFrame(data)
        return df

# this is helper function to convert dataframe of smiles to dataframe of ROMol.
def allsmi2rdmol(df):
    cols = df.columns.to_list()
    for col in cols:
        PandasTools.AddMoleculeColumnToFrame(df, smilesCol=str(col), molCol=str(col))
    return df
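A note on get_scaffold: RWMol.RemoveAtom renumbers the remaining atoms, so removals have to be done in descending index order, otherwise later indices point at the wrong atoms. The pattern is the same as for any index-based container, e.g. a plain list:

```python
# deleting by index: go in descending order so earlier removals
# don't shift the indices of the items still to be removed
items = ['C', 'C', 'O', 'N', 'C']
to_remove = [0, 2]

for idx in sorted(to_remove, reverse=True):
    del items[idx]

print(items)  # ['C', 'N', 'C']
```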

Now all the pieces are in place.
Let's check it. I used cdk2.sdf for the test.

rgd = RGroupDecomposer(mols[:40], core)
rgd.rg_decompose()
df = rgd.get_df()
newdf = allsmi2rdmol(df)
from IPython.display import display_html
from IPython.display import HTML
HTML(df.to_html())

It seems to work. I used HTML for SVG image visualization. Usually this step is not required, but my environment can't render mol images without it, even if RenderImagesInAllDataFrames is set to True……

I’m not sure how to fix the problem ;-(

Today's code is uploaded to my gist.

code example for R-group decomposition

Any comments and suggestions will be greatly appreciated.

Extract chemical information from patent data #pat-informatics #chemoinformatics

As you know, patent informatics is important for drug discovery projects. SureChEMBL is one of the datasets of chemical structures extracted from patent documents by OCR. It is great that it is a freely available data source.

I was surprised that Google Patents recently started providing chemical data too.

It does not seem to fully cover all structures, but it looks cool. Let's see an example; the URL is below:
https://patents.google.com/patent/WO2012125893A1/en?oq=WO2012125893A1.

The machine-extracted information, including structures, is listed in the 'Concepts' table like below. I'm not sure which structures are extracted by the machine, so it may not be all structures.

The page source is HTML, so I tried to extract the data with python ;).

To parse HTML, I used BeautifulSoup. It's very useful for HTML parsing. The SMILES data is located under 'concept => ul => span (itemprop=smiles)'. Additional information is provided too, such as the name and the domain, which shows where in the document the data was extracted from.

With a few lines of code, this data can be extracted and collected into a pandas data frame.
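As a minimal stand-in for that parsing step, here is a sketch using only the standard library's html.parser instead of BeautifulSoup (the HTML snippet and class name are invented for illustration; the real page layout follows the description above):

```python
from html.parser import HTMLParser

class SmilesExtractor(HTMLParser):
    """Collect the text inside <span itemprop="smiles"> elements."""
    def __init__(self):
        super().__init__()
        self.in_smiles = False
        self.smiles = []

    def handle_starttag(self, tag, attrs):
        if tag == 'span' and ('itemprop', 'smiles') in attrs:
            self.in_smiles = True

    def handle_data(self, data):
        if self.in_smiles and data.strip():
            self.smiles.append(data.strip())
            self.in_smiles = False

html_doc = '''
<li><span itemprop="name">ethanol</span>
    <span itemprop="smiles">CCO</span></li>
<li><span itemprop="name">benzene</span>
    <span itemprop="smiles">c1ccccc1</span></li>
'''
parser = SmilesExtractor()
parser.feed(html_doc)
print(parser.smiles)  # ['CCO', 'c1ccccc1']
```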

If the machine can extract all data in Experimental, Description and Claim, it will be powerful tool for pat-informatics.

Google provides many services which are freely available.