25 Commits

Author SHA1 Message Date
eugenefischer  756e5572b9  update readme  2022-09-29 00:00:19 -05:00
eugenefischer  c30167d5ec  Change real sequence collision so it isn't biased toward sequences in the earlier wells.  2022-09-28 23:15:55 -05:00
eugenefischer  a19525f5bb  update readme  2022-09-28 23:01:59 -05:00
eugenefischer  e5803defa3  Bug fix, add comments  2022-09-28 18:09:47 -05:00
eugenefischer  34dc2a5721  Add real sequence collision rate  2022-09-28 17:54:55 -05:00
eugenefischer  fd106a0d73  Add real sequence collision rate  2022-09-28 17:46:09 -05:00
eugenefischer  22faad3414  Add real sequence collision rate  2022-09-28 17:45:09 -05:00
eugenefischer  0b36e2b742  Rewrite countSequences to allow for collision with real sequences on misreads  2022-09-28 17:44:26 -05:00
eugenefischer  9dacd8cd34  Add real sequence collision rate  2022-09-28 17:43:21 -05:00
eugenefischer  89687fa849  Add real sequence collision rate, make fields final  2022-09-28 17:43:06 -05:00
eugenefischer  fb443fe958  Revert "Add getCell and getRandomCell methods" (reverts commit adebe1542e)  2022-09-28 14:36:20 -05:00
eugenefischer  adebe1542e  Add getCell and getRandomCell methods  2022-09-28 13:49:50 -05:00
eugenefischer  882fbfffc6  Purge old code  2022-09-28 13:40:13 -05:00
eugenefischer  a88cfb8b0d  Add read counts for individual wells to graphml output  2022-09-28 13:38:38 -05:00
eugenefischer  deed98e79d  Bugfix  2022-09-28 12:58:14 -05:00
eugenefischer  1a35600f50  Add method to get read count from individual wells  2022-09-28 12:57:45 -05:00
eugenefischer  856063529b  Read depth simulation is now compatible with plate caching  2022-09-28 12:47:00 -05:00
eugenefischer  b7c86f20b3  Add read depth attributes to graphml output  2022-09-28 03:01:52 -05:00
eugenefischer  3a47efd361  Update TODO  2022-09-28 03:01:03 -05:00
eugenefischer  58bb04c431  Remove redundant toString() calls  2022-09-28 02:08:17 -05:00
eugenefischer  610da68262  Refactor Vertex class to use SequenceRecords  2022-09-28 00:58:44 -05:00
eugenefischer  9973473cc6  Make serializable and implement getWellOccupancies method  2022-09-28 00:58:02 -05:00
eugenefischer  8781afd74c  Reorder conditional  2022-09-28 00:57:06 -05:00
eugenefischer  88b6c79caa  Refactor to simplify graph creation code  2022-09-28 00:07:59 -05:00
eugenefischer  35a519d499  update TODO  2022-09-27 22:20:57 -05:00
9 changed files with 375 additions and 370 deletions

View File

@@ -96,7 +96,7 @@ These files are often generated in sequence. When entering filenames, it is not
 (.csv or .ser). When reading or writing files, the program will automatically add the correct extension to any filename
 without one.
-To save file I/O time, the most recent instance of each of these four
+To save file I/O time when using the interactive interface, the most recent instance of each of these four
 files either generated or read from disk can be cached in program memory. When caching is active, subsequent uses of the
 same data file won't need to be read in again until another file of that type is used or generated,
 or caching is turned off for that file type. The program checks whether it needs to update its cached data by comparing
@@ -160,7 +160,7 @@ Options when making a Sample Plate file:
 * Number of sections on plate
 * Number of T cells per well
   * per section, if more than one section
-* Dropout rate
+* Sequence dropout rate
 Files are in CSV format. There are no header labels. Every row represents a well.
 Every value represents an individual cell, containing four sequences, depicted as an array string:
@@ -200,6 +200,11 @@ then use it for multiple different BiGpairSEQ simulations.
 Options for creating a Graph/Data file:
 * The Cell Sample file to use
 * The Sample Plate file to use. (This must have been generated from the selected Cell Sample file.)
+* Whether to simulate sequence read depth. If simulated:
+  * The read depth (number of times each sequence is read)
+  * The read error rate (probability a sequence is misread)
+  * The error collision rate (probability two misreads produce the same spurious sequence)
+  * The real sequence collision rate (probability that a misread will produce a different, real sequence from the sample plate. Only applies to new misreads; once an error of this type has occurred, its likelihood of occurring again is dominated by the error collision probability.)
 These files do not have a human-readable structure, and are not portable to other programs.
@@ -207,8 +212,8 @@ These files do not have a human-readable structure, and are not portable to othe
 For portability of graph data to other software, turn on [GraphML](http://graphml.graphdrawing.org/index.html) output
 in the Options menu in interactive mode, or use the `-graphml` command line argument. This will produce a .graphml file
-for the weighted graph, with vertex attributes for sequence, type, and occupancy data. This graph contains all the data
-necessary for the BiGpairSEQ matching algorithm. It does not include the data to measure pairing accuracy; for that,
+for the weighted graph, with vertex attributes for sequence, type, total occupancy, total read count, and the read count for every individual occupied well.
+This graph contains all the data necessary for the BiGpairSEQ matching algorithm. It does not include the data to measure pairing accuracy; for that,
 compare the matching results to the original Cell Sample .csv file.
 ---
@@ -265,9 +270,7 @@ P-values are calculated *after* BiGpairSEQ matching is completed, for purposes o
 using the (2021 corrected) formula from the original pairSEQ paper. (Howie, et al. 2015)
-## PERFORMANCE
-(NOTE: These results are from an older, less efficient version of the simulator, and need to be updated.)
+## PERFORMANCE (old results; need updating to reflect current, improved simulator performance)
 On a home computer with a Ryzen 5600X CPU, 64GB of 3200MHz DDR4 RAM (half of which was allocated to the Java Virtual Machine), and a PCIe 3.0 SSD, running Linux Mint 20.3 Edge (5.13 kernel),
 the author ran a BiGpairSEQ simulation of a 96-well sample plate with 30,000 T cells/well comprising ~11,800 alphas and betas,
@@ -342,24 +345,28 @@ roughly as though it had a constant well population equal to the plate's average
 * ~~Add controllable heap-type parameter?~~
   * Parameter implemented. Fibonacci heap the current default.
 * ~~Implement sample plates with random numbers of T cells per well.~~ DONE
-  * Possible BiGpairSEQ advantage over pairSEQ: BiGpairSEQ is resilient to variations in well population sizes on a sample plate; pairSEQ is not.
+  * Possible BiGpairSEQ advantage over pairSEQ: BiGpairSEQ is resilient to variations in well population sizes on a sample plate; pairSEQ is not, due to the nature of its probability calculations.
   * preliminary data suggests that BiGpairSEQ behaves roughly as though the whole plate had whatever the *average* well concentration is, but that's still speculative.
-* See if there's a reasonable way to reformat Sample Plate files so that wells are columns instead of rows.
+* ~~See if there's a reasonable way to reformat Sample Plate files so that wells are columns instead of rows.~~
   * ~~Problem is variable number of cells in a well~~
   * ~~Apache Commons CSV library writes entries a row at a time~~
-  * _Got this working, but at the cost of a profoundly strange bug in graph occupancy filtering. Have reverted the repo until I can figure out what caused that. Given how easily Thingiverse transposes CSV matrices in R, might not even be worth fixing.
+  * Got this working, but at the cost of a profoundly strange bug in graph occupancy filtering. Have reverted the repo until I can figure out what caused that. Given how easily Thingiverse transposes CSV matrices in R, might not even be worth fixing.
 * ~~Enable GraphML output in addition to serialized object binaries, for data portability~~ DONE
   * ~~Have a branch where this is implemented, but there's a bug that broke matching. Don't currently have time to fix.~~
 * ~~Re-implement command line arguments, to enable scripting and statistical simulation studies~~ DONE
 * ~~Implement custom Vertex class to simplify code and make it easier to implement different MWM algorithms~~ DONE
   * Advantage: would eliminate the need to use maps to associate vertices with sequences, which would make the code easier to understand.
   * This also seems to be faster when using the same algorithm than the version with lots of maps, which is a nice bonus!
-* Re-implement CDR1 matching method
 * ~~Implement simulation of read depth, and of read errors. Pre-filter graph for difference in read count to eliminate spurious sequences.~~ DONE
   * Pre-filtering based on comparing (read depth) * (occupancy) to (read count) for each sequence works extremely well
 * ~~Add read depth simulation options to CLI~~ DONE
+* ~~Update graphml output to reflect current Vertex class attributes~~ DONE
+  * Individual well data from the SequenceRecords could be included, if there's ever a reason for it
+* ~~Implement simulation of sequences being misread as other real sequences~~ DONE
 * Update matching metadata output options in CLI
 * Update performance data in this readme
+* Add section to ReadMe describing data filtering methods.
+* Re-implement CDR1 matching method
 * Refactor simulator code to collect all needed data in a single scan of the plate
   * Currently it scans once for the vertices and then again for the edge weights. This made simulating read depth awkward, and incompatible with caching of plate files.
   * This would be a fairly major rewrite of the simulator code, but could make things faster, and would definitely make them cleaner.
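The pre-filter noted above compares each sequence's expected read count, (read depth) * (occupancy), to its observed total read count. A minimal sketch of that idea follows; the `SeqVertex` record, the `filter` helper, and the 0.5 threshold are illustrative assumptions, not the simulator's actual code.

```java
import java.util.List;
import java.util.stream.Collectors;

public class ReadCountFilter {
    //Minimal stand-in for the per-sequence data the simulator tracks.
    record SeqVertex(String sequence, int occupancy, int readCount) {}

    //Keep vertices whose observed read count is at least `fraction` of the expected
    //count, readDepth * occupancy. A real sequence is read readDepth times in every
    //well it occupies, so its ratio sits near 1.0; a spurious sequence produced by
    //scattered misreads accumulates far fewer reads than the product predicts.
    static List<SeqVertex> filter(List<SeqVertex> vertices, int readDepth, double fraction) {
        return vertices.stream()
                .filter(v -> v.readCount() >= fraction * readDepth * v.occupancy())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<SeqVertex> vertices = List.of(
                new SeqVertex("CASSLGQAYEQYF", 40, 1190), //real: ~30 reads in each of 40 wells
                new SeqVertex("CASSXSPURIOUS", 12, 15));  //spurious: a handful of misreads
        //At read depth 30 and threshold 0.5, only the real sequence survives.
        System.out.println(filter(vertices, 30, 0.5).size()); //prints 1
    }
}
```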

View File

@@ -150,16 +150,21 @@ public class CommandLineInterface {
 Integer readDepth = 1;
 Double readErrorRate = 0.0;
 Double errorCollisionRate = 0.0;
+Double realSequenceCollisionRate = 0.0;
 if (line.hasOption("rd")) {
     readDepth = Integer.parseInt(line.getOptionValue("rd"));
 }
 if (line.hasOption("err")) {
     readErrorRate = Double.parseDouble(line.getOptionValue("err"));
 }
-if (line.hasOption("coll")) {
-    errorCollisionRate = Double.parseDouble(line.getOptionValue("coll"));
+if (line.hasOption("errcoll")) {
+    errorCollisionRate = Double.parseDouble(line.getOptionValue("errcoll"));
 }
-graph = Simulator.makeGraph(cells, plate, readDepth, readErrorRate, errorCollisionRate, false);
+if (line.hasOption("realcoll")) {
+    realSequenceCollisionRate = Double.parseDouble(line.getOptionValue("realcoll"));
+}
+graph = Simulator.makeCDR3Graph(cells, plate, readDepth, readErrorRate, errorCollisionRate,
+        realSequenceCollisionRate, false);
 if (!line.hasOption("no-binary")) { //output binary file unless told not to
     GraphDataObjectWriter writer = new GraphDataObjectWriter(outputFilename, graph, false);
     writer.writeDataToFile();
@@ -413,12 +418,20 @@ public class CommandLineInterface {
     .hasArg()
     .argName("prob")
     .build();
-Option errorCollisionProb = Option.builder("coll")
+Option errorCollisionProb = Option.builder("errcoll")
     .longOpt("error-collision-prob")
     .desc("(Optional) The probability that two misreads will produce the same spurious sequence. (0.0 - 1.0)")
     .hasArg()
     .argName("prob")
     .build();
+Option realSequenceCollisionProb = Option.builder("realcoll")
+    .longOpt("real-collision-prob")
+    .desc("(Optional) The probability that a sequence will be misread " +
+            "as another real sequence. (Only applies to unique misreads; after this has happened once, " +
+            "future error collisions could produce the real sequence again) (0.0 - 1.0)")
+    .hasArg()
+    .argName("prob")
+    .build();
 graphOptions.addOption(cellFilename);
 graphOptions.addOption(plateFilename);
 graphOptions.addOption(outputFileOption());
@@ -427,6 +440,7 @@ public class CommandLineInterface {
 graphOptions.addOption(readDepth);
 graphOptions.addOption(readErrorProb);
 graphOptions.addOption(errorCollisionProb);
+graphOptions.addOption(realSequenceCollisionProb);
 return graphOptions;
 }

View File

@@ -5,7 +5,6 @@ import org.jgrapht.nio.AttributeType;
 import org.jgrapht.nio.DefaultAttribute;
 import org.jgrapht.nio.graphml.GraphMLExporter;
 import org.jgrapht.nio.graphml.GraphMLExporter.AttributeCategory;
-import org.w3c.dom.Attr;
 import java.io.BufferedWriter;
 import java.io.IOException;
@@ -13,6 +12,7 @@ import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.StandardOpenOption;
 import java.util.HashMap;
+import java.util.Iterator;
 import java.util.Map;
 public class GraphMLFileWriter {
@@ -41,11 +41,11 @@ public class GraphMLFileWriter {
 }
 private Map<String, Attribute> createGraphAttributes(){
-    Map<String, Attribute> ga = new HashMap<>();
+    Map<String, Attribute> attributes = new HashMap<>();
     //Sample plate filename
-    ga.put("sample plate filename", DefaultAttribute.createAttribute(data.getSourceFilename()));
+    attributes.put("sample plate filename", DefaultAttribute.createAttribute(data.getSourceFilename()));
     // Number of wells
-    ga.put("well count", DefaultAttribute.createAttribute(data.getNumWells().toString()));
+    attributes.put("well count", DefaultAttribute.createAttribute(data.getNumWells().toString()));
     //Well populations
     Integer[] wellPopulations = data.getWellPopulations();
     StringBuilder populationsStringBuilder = new StringBuilder();
@@ -55,8 +55,37 @@ public class GraphMLFileWriter {
         populationsStringBuilder.append(wellPopulations[i].toString());
     }
     String wellPopulationsString = populationsStringBuilder.toString();
-    ga.put("well populations", DefaultAttribute.createAttribute(wellPopulationsString));
-    return ga;
+    attributes.put("well populations", DefaultAttribute.createAttribute(wellPopulationsString));
+    attributes.put("read depth", DefaultAttribute.createAttribute(data.getReadDepth().toString()));
+    attributes.put("read error rate", DefaultAttribute.createAttribute(data.getReadErrorRate().toString()));
+    attributes.put("error collision rate", DefaultAttribute.createAttribute(data.getErrorCollisionRate().toString()));
+    attributes.put("real sequence collision rate", DefaultAttribute.createAttribute(data.getRealSequenceCollisionRate()));
+    return attributes;
+}
+
+private Map<String, Attribute> createVertexAttributes(Vertex v){
+    Map<String, Attribute> attributes = new HashMap<>();
+    //sequence type
+    attributes.put("type", DefaultAttribute.createAttribute(v.getType().name()));
+    //sequence
+    attributes.put("sequence", DefaultAttribute.createAttribute(v.getSequence()));
+    //number of wells the sequence appears in
+    attributes.put("occupancy", DefaultAttribute.createAttribute(v.getOccupancy()));
+    //total number of times the sequence was read
+    attributes.put("total read count", DefaultAttribute.createAttribute(v.getReadCount()));
+    StringBuilder wellsAndReadCountsBuilder = new StringBuilder();
+    Iterator<Map.Entry<Integer, Integer>> wellOccupancies = v.getWellOccupancies().entrySet().iterator();
+    while (wellOccupancies.hasNext()) {
+        Map.Entry<Integer, Integer> entry = wellOccupancies.next();
+        wellsAndReadCountsBuilder.append(entry.getKey() + ":" + entry.getValue());
+        if (wellOccupancies.hasNext()) {
+            wellsAndReadCountsBuilder.append(", ");
+        }
+    }
+    String wellsAndReadCounts = wellsAndReadCountsBuilder.toString();
+    //the wells the sequence appears in and the read counts in those wells
+    attributes.put("wells:read counts", DefaultAttribute.createAttribute(wellsAndReadCounts));
+    return attributes;
 }
 public void writeGraphToFile() {
@@ -69,14 +98,7 @@ public class GraphMLFileWriter {
 //Set graph attributes
 exporter.setGraphAttributeProvider( () -> graphAttributes);
 //set type, sequence, and occupancy attributes for each vertex
-//NEED TO ADD NEW FIELD FOR READ COUNT
-exporter.setVertexAttributeProvider( v -> {
-    Map<String, Attribute> attributes = new HashMap<>();
-    attributes.put("type", DefaultAttribute.createAttribute(v.getType().name()));
-    attributes.put("sequence", DefaultAttribute.createAttribute(v.getSequence()));
-    attributes.put("occupancy", DefaultAttribute.createAttribute(v.getOccupancy()));
-    return attributes;
-});
+exporter.setVertexAttributeProvider(this::createVertexAttributes);
 //register the attributes
 for(String s : graphAttributes.keySet()) {
     exporter.registerAttribute(s, AttributeCategory.GRAPH, AttributeType.STRING);
@@ -84,6 +106,8 @@ public class GraphMLFileWriter {
 exporter.registerAttribute("type", AttributeCategory.NODE, AttributeType.STRING);
 exporter.registerAttribute("sequence", AttributeCategory.NODE, AttributeType.STRING);
 exporter.registerAttribute("occupancy", AttributeCategory.NODE, AttributeType.STRING);
+exporter.registerAttribute("total read count", AttributeCategory.NODE, AttributeType.STRING);
+exporter.registerAttribute("wells:read counts", AttributeCategory.NODE, AttributeType.STRING);
 //export the graph
 exporter.exportGraph(graph, writer);
 } catch(IOException ex){

View File

@@ -11,13 +11,14 @@ public class GraphWithMapData implements java.io.Serializable {
 private String sourceFilename;
 private final SimpleWeightedGraph graph;
-private Integer numWells;
-private Integer[] wellPopulations;
-private Integer alphaCount;
-private Integer betaCount;
-private int readDepth;
-private double readErrorRate;
-private double errorCollisionRate;
+private final int numWells;
+private final Integer[] wellPopulations;
+private final int alphaCount;
+private final int betaCount;
+private final int readDepth;
+private final double readErrorRate;
+private final double errorCollisionRate;
+private final double realSequenceCollisionRate;
 private final Map<String, String> distCellsMapAlphaKey;
 // private final Map<Integer, Integer> plateVtoAMap;
 // private final Map<Integer, Integer> plateVtoBMap;
@@ -29,7 +30,8 @@ public class GraphWithMapData implements java.io.Serializable {
 public GraphWithMapData(SimpleWeightedGraph graph, Integer numWells, Integer[] wellConcentrations,
                         Map<String, String> distCellsMapAlphaKey, Integer alphaCount, Integer betaCount,
-                        Integer readDepth, Double readErrorRate, Double errorCollisionRate, Duration time){
+                        Integer readDepth, Double readErrorRate, Double errorCollisionRate,
+                        Double realSequenceCollisionRate, Duration time){
 //    Map<Integer, Integer> plateVtoAMap,
 //    Map<Integer,Integer> plateVtoBMap, Map<Integer, Integer> plateAtoVMap,
@@ -50,6 +52,7 @@ public class GraphWithMapData implements java.io.Serializable {
     this.readDepth = readDepth;
     this.readErrorRate = readErrorRate;
     this.errorCollisionRate = errorCollisionRate;
+    this.realSequenceCollisionRate = realSequenceCollisionRate;
     this.time = time;
 }
@@ -122,4 +125,6 @@ public class GraphWithMapData implements java.io.Serializable {
 public Double getErrorCollisionRate() {
     return errorCollisionRate;
 }
+public Double getRealSequenceCollisionRate() { return realSequenceCollisionRate; }
 }

View File

@@ -255,6 +255,7 @@ public class InteractiveInterface {
 int readDepth = 1;
 double readErrorRate = 0.0;
 double errorCollisionRate = 0.0;
+double realSequenceCollisionRate = 0.0;
 try {
     String str = "\nGenerating bipartite weighted graph encoding occupancy overlap data ";
     str = str.concat("\nrequires a cell sample file and a sample plate file.");
@@ -264,7 +265,6 @@ public class InteractiveInterface {
 System.out.print("\nPlease enter name of an existing sample plate file: ");
 plateFile = sc.next();
 System.out.println("\nEnable simulation of sequence read depth and sequence read errors? (y/n)");
-System.out.println("NOTE: sample plate data cannot be cached when simulating read errors");
 String ans = sc.next();
 Pattern pattern = Pattern.compile("(?:yes|y)", Pattern.CASE_INSENSITIVE);
 Matcher matcher = pattern.matcher(ans);
@@ -272,25 +272,29 @@ public class InteractiveInterface {
     simulateReadDepth = true;
 }
 if (simulateReadDepth) {
-    BiGpairSEQ.setCachePlate(false);
-    BiGpairSEQ.clearPlateInMemory();
-    System.out.print("\nPlease enter read depth (the integer number of reads per sequence): ");
+    System.out.print("\nPlease enter the read depth (the integer number of times a sequence is read): ");
     readDepth = sc.nextInt();
     if(readDepth < 1) {
         throw new InputMismatchException("The read depth must be an integer >= 1");
     }
-    System.out.print("\nPlease enter probability of a sequence read error (0.0 to 1.0): ");
+    System.out.println("\nPlease enter the read error probability (0.0 to 1.0)");
+    System.out.print("(The probability that a sequence will be misread): ");
     readErrorRate = sc.nextDouble();
     if(readErrorRate < 0.0 || readErrorRate > 1.0) {
-        throw new InputMismatchException("The read error rate must be in the range [0.0, 1.0]");
+        throw new InputMismatchException("The read error probability must be in the range [0.0, 1.0]");
     }
-    System.out.println("\nPlease enter the probability of read error collision");
-    System.out.println("(the likelihood that two read errors produce the same spurious sequence)");
-    System.out.print("(0.0 to 1.0): ");
+    System.out.println("\nPlease enter the error collision probability (0.0 to 1.0)");
+    System.out.print("(The probability of a sequence being misread in a way it has been misread before): ");
     errorCollisionRate = sc.nextDouble();
     if(errorCollisionRate < 0.0 || errorCollisionRate > 1.0) {
         throw new InputMismatchException("The error collision probability must be in the range [0.0, 1.0]");
     }
+    System.out.println("\nPlease enter the real sequence collision probability (0.0 to 1.0)");
+    System.out.print("(The probability that a (non-collision) misread produces a different, real sequence): ");
+    realSequenceCollisionRate = sc.nextDouble();
+    if(realSequenceCollisionRate < 0.0 || realSequenceCollisionRate > 1.0) {
+        throw new InputMismatchException("The real sequence collision probability must be in the range [0.0, 1.0]");
+    }
 }
 System.out.println("\nThe graph and occupancy data will be written to a file.");
 System.out.print("Please enter a name for the output file: ");
@@ -338,7 +342,8 @@ public class InteractiveInterface {
     System.out.println("Returning to main menu.");
 }
 else{
-    GraphWithMapData data = Simulator.makeGraph(cellSample, plate, readDepth, readErrorRate, errorCollisionRate, true);
+    GraphWithMapData data = Simulator.makeCDR3Graph(cellSample, plate, readDepth, readErrorRate,
+            errorCollisionRate, realSequenceCollisionRate, true);
     assert filename != null;
     if(BiGpairSEQ.outputBinary()) {
         GraphDataObjectWriter dataWriter = new GraphDataObjectWriter(filename, data);

View File

@@ -2,6 +2,13 @@
 /*
 TODO: Implement exponential distribution using inversion method - DONE
+TODO: Implement collisions with real sequences by having the counting function keep a map of all sequences it's read,
+with values of all misreads. Can then have a spurious/real collision rate, which will have count randomly select a sequence
+it's already read at least once, and put that into the list of spurious sequences for the given real sequence. Will let me get rid
+of the distinctMisreadCount map, and use this new map instead. Doing it this way, once a sequence has been misread as another
+sequence once, it is more likely to be misread that way again, as future read error collisions can also be real sequence collisions.
+Prob A: a read error occurs. Prob B: it's a new error (otherwise it's a repeated error). Prob C: if new error, prob that it's
+a real sequence collision (otherwise it's a new spurious sequence) - DONE
 TODO: Implement discrete frequency distributions using Vose's Alias Method
 */
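The Prob A / Prob B / Prob C cascade described in that TODO can be sketched as a small decision function over three independent uniform draws. This is an illustrative reconstruction, not the Plate class's actual code; the names `MisreadModel`, `classifyRead`, and `ReadOutcome` are hypothetical.

```java
public class MisreadModel {
    enum ReadOutcome { CORRECT, REPEATED_MISREAD, REAL_COLLISION, NEW_SPURIOUS }

    //Classify a single read from three independent uniform draws in [0, 1).
    //u1 vs readErrorRate: does a read error occur at all? (Prob A)
    //u2 vs errorCollisionRate: is the error a repeat of an earlier misread? (Prob B)
    //u3 vs realSequenceCollisionRate: does a new misread collide with a real sequence? (Prob C)
    static ReadOutcome classifyRead(double u1, double u2, double u3,
                                    double readErrorRate, double errorCollisionRate,
                                    double realSequenceCollisionRate) {
        if (u1 >= readErrorRate) return ReadOutcome.CORRECT;
        if (u2 < errorCollisionRate) return ReadOutcome.REPEATED_MISREAD;
        if (u3 < realSequenceCollisionRate) return ReadOutcome.REAL_COLLISION;
        return ReadOutcome.NEW_SPURIOUS;
    }

    public static void main(String[] args) {
        //An error draw below the 1% error rate, followed by non-repeat and
        //non-collision draws, yields a brand-new spurious sequence.
        System.out.println(classifyRead(0.005, 0.8, 0.9, 0.01, 0.10, 0.05)); //prints NEW_SPURIOUS
    }
}
```

Note the asymmetry the TODO describes: once a sequence has been misread as a real sequence, that outcome can recur through the repeated-misread branch, so the real-collision probability only governs new errors.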
@@ -148,125 +155,103 @@ public class Plate {
return wells; return wells;
} }
    //For the sequences at cell indices sIndices, counts the number of unique sequences in all wells.
    //Also simulates sequence read errors with the given probabilities.
    //Returns a map of SequenceRecords containing plate data for all sequences read.
    //TODO actually implement usage of misreadSequences - DONE
    public Map<String, SequenceRecord> countSequences(Integer readDepth, Double readErrorRate,
                                                      Double errorCollisionRate, Double realSequenceCollisionRate, int... sIndices) {
        SequenceType[] sequenceTypes = EnumSet.allOf(SequenceType.class).toArray(new SequenceType[0]);
        //Map of all real sequences read. Keys are sequences, values are the ways each sequence has been misread.
        Map<String, List<String>> sequencesAndMisreads = new HashMap<>();
        //Map of all sequences read. Keys are sequences, values are the associated SequenceRecords.
        Map<String, SequenceRecord> sequenceMap = new LinkedHashMap<>();
        //get an array of all distinct, real sequences on the plate
        String[] realSequences = assayWells(sIndices).toArray(new String[0]);
        for (int well = 0; well < size; well++) {
            for (String[] cell: wells.get(well)) {
                for (int sIndex: sIndices) {
                    //the sequence being read
                    String currentSequence = cell[sIndex];
                    //skip dropout sequences, which have value -1
                    if (!"-1".equals(currentSequence)) {
                        //keep rereading the sequence until the read depth is reached
                        for (int j = 0; j < readDepth; j++) {
                            //The sequence is misread
                            if (rand.nextDouble() < readErrorRate) {
                                //The sequence hasn't been read or misread before
                                if (!sequencesAndMisreads.containsKey(currentSequence)) {
                                    sequencesAndMisreads.put(currentSequence, new ArrayList<>());
                                }
                                List<String> misreads = sequencesAndMisreads.get(currentSequence);
                                //The specific misread hasn't happened before
                                if (rand.nextDouble() >= errorCollisionRate || misreads.isEmpty()) {
                                    //The misread doesn't collide with a real sequence
                                    //(or nothing has been read yet for it to collide with)
                                    if (rand.nextDouble() >= realSequenceCollisionRate || sequenceMap.isEmpty()) {
                                        //Read errors are represented by appending "*"s to the sequence;
                                        //one more "*" than any previous misread keeps the spurious sequence distinct
                                        StringBuilder spurious = new StringBuilder(currentSequence);
                                        for (int k = 0; k <= misreads.size(); k++) {
                                            spurious.append("*");
                                        }
                                        //New sequence record for the spurious sequence
                                        SequenceRecord tmp = new SequenceRecord(spurious.toString(), sequenceTypes[sIndex]);
                                        tmp.addRead(well);
                                        sequenceMap.put(spurious.toString(), tmp);
                                        //add the spurious sequence to the list of misreads for the real sequence
                                        misreads.add(spurious.toString());
                                    }
                                    //The misread collides with a real sequence from the plate
                                    else {
                                        String wrongSequence;
                                        do {
                                            //get a random real sequence from the plate;
                                            //make sure it's not a dropout, not accidentally the *right* sequence,
                                            //and not a wrong sequence already in the misread list
                                            wrongSequence = realSequences[rand.nextInt(realSequences.length)];
                                        } while ("-1".equals(wrongSequence) || currentSequence.equals(wrongSequence)
                                                || misreads.contains(wrongSequence));
                                        //update the SequenceRecord for wrongSequence,
                                        //creating one first if that sequence hasn't been read yet
                                        sequenceMap.computeIfAbsent(wrongSequence,
                                                s -> new SequenceRecord(s, sequenceTypes[sIndex])).addRead(well);
                                        //add wrongSequence to the misreads for currentSequence
                                        misreads.add(wrongSequence);
                                    }
                                }
                                //The misread repeats an earlier error for this sequence
                                else {
                                    String repeatedMisread = misreads.get(rand.nextInt(misreads.size()));
                                    sequenceMap.get(repeatedMisread).addRead(well);
                                }
                            }
                            //The sequence is read correctly
                            else {
                                //the sequence hasn't been read before
                                if (!sequenceMap.containsKey(currentSequence)) {
                                    //create a new record for the sequence and add this read to it
                                    SequenceRecord tmp = new SequenceRecord(currentSequence, sequenceTypes[sIndex]);
                                    tmp.addRead(well);
                                    sequenceMap.put(currentSequence, tmp);
                                    //track the sequence in the misreads map (putIfAbsent, so earlier misreads aren't lost)
                                    sequencesAndMisreads.putIfAbsent(currentSequence, new ArrayList<>());
                                }
                                //the sequence has been read before
                                else {
                                    //get the sequence's record and add this read to it
                                    sequenceMap.get(currentSequence).addRead(well);
                                }
                            }
                        }
                    }
                }
            }
        }
        return sequenceMap;
    }
    //returns the set of all distinct, real sequences at the given cell indices, across all wells
    private HashSet<String> assayWells(int[] indices) {
        HashSet<String> allSequences = new HashSet<>();
        for (List<String[]> well: wells) {
            for (String[] cell: well) {
                for (int index: indices) {
                    //skip dropout sequences, which have value -1
                    if (!"-1".equals(cell[index])) {
                        allSequences.add(cell[index]);
                    }
                }
            }
        }
        return allSequences;
    }
    public String getSourceFileName() {


@@ -0,0 +1,65 @@
/*
Class to represent individual sequences, holding their well occupancy and read count information.
Will make a map of these keyed to the sequences themselves.
Ideally, I'll be able to construct both the Vertices and the weights matrix from this map.
*/
import java.io.Serializable;
import java.util.*;

public class SequenceRecord implements Serializable {
    private final String sequence;
    private final SequenceType type;
    //keys are well numbers, values are read count in that well
    private final Map<Integer, Integer> wells;

    public SequenceRecord(String sequence, SequenceType type) {
        this.sequence = sequence;
        this.type = type;
        this.wells = new LinkedHashMap<>();
    }

    //this shouldn't be necessary, since the sequence will be the map key, but it's here in case I need it
    public String getSequence() {
        return sequence;
    }

    public SequenceType getSequenceType(){
        return type;
    }

    //use this to update the record for each new read
    public void addRead(Integer wellNumber) {
        wells.merge(wellNumber, 1, Integer::sum);
    }

    //don't know if I'll ever need this
    public void addWellData(Integer wellNumber, Integer readCount) {
        wells.put(wellNumber, readCount);
    }

    public Set<Integer> getWells() {
        return wells.keySet();
    }

    public Map<Integer, Integer> getWellOccupancies() {
        return wells;
    }

    public boolean isInWell(Integer wellNumber) {
        return wells.containsKey(wellNumber);
    }

    public Integer getOccupancy() {
        return wells.size();
    }

    //read count for whole plate
    public Integer getReadCount(){
        return wells.values().stream().mapToInt(Integer::intValue).sum();
    }

    //read count in a specific well
    public Integer getReadCount(Integer wellNumber) {
        return wells.get(wellNumber);
    }
}
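A quick usage sketch of the record's bookkeeping: occupancy is the number of distinct wells, while read counts accumulate per well via `Map.merge`. This is a condensed, self-contained copy for demonstration only (the `SequenceType` enum is stubbed, and the sequence string is made up):

```java
import java.util.*;

public class SequenceRecordDemo {
    enum SequenceType { CDR3_ALPHA, CDR3_BETA }   //stub of the repo's enum, for this sketch only

    //condensed copy of SequenceRecord, just enough to show the occupancy/read-count behavior
    static class SequenceRecord {
        private final String sequence;
        private final SequenceType type;
        private final Map<Integer, Integer> wells = new LinkedHashMap<>();
        SequenceRecord(String sequence, SequenceType type) { this.sequence = sequence; this.type = type; }
        void addRead(Integer wellNumber) { wells.merge(wellNumber, 1, Integer::sum); }
        int getOccupancy() { return wells.size(); }
        int getReadCount() { return wells.values().stream().mapToInt(Integer::intValue).sum(); }
        int getReadCount(Integer wellNumber) { return wells.getOrDefault(wellNumber, 0); }
    }

    public static void main(String[] args) {
        SequenceRecord rec = new SequenceRecord("CASSLGT", SequenceType.CDR3_ALPHA);
        rec.addRead(0);
        rec.addRead(0);
        rec.addRead(3);                           //two reads in well 0, one in well 3
        System.out.println(rec.getOccupancy());   //prints 2: wells occupied
        System.out.println(rec.getReadCount());   //prints 3: total reads on the plate
        System.out.println(rec.getReadCount(0));  //prints 2: reads in well 0
    }
}
```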


@@ -12,78 +12,69 @@ import java.text.NumberFormat;
import java.time.Instant;
import java.time.Duration;
import java.util.*;
/*
Refactor notes
What would be necessary to do everything with only one scan through the sample plate?
I would need to keep a list of sequences (real and spurious), and metadata about each sequence.
I would need the data:
* # of each well the sequence appears in
* Read count in that well
*/
//NOTE: "sequence" in method and variable names refers to a peptide sequence from a simulated T cell
public class Simulator implements GraphModificationFunctions {
    public static GraphWithMapData makeCDR3Graph(CellSample cellSample, Plate samplePlate, int readDepth,
                                                 double readErrorRate, double errorCollisionRate,
                                                 double realSequenceCollisionRate, boolean verbose) {
        //start timing
        Instant start = Instant.now();
        int[] alphaIndices = {SequenceType.CDR3_ALPHA.ordinal()};
        int[] betaIndices = {SequenceType.CDR3_BETA.ordinal()};
        List<String[]> distinctCells = cellSample.getCells();
        int numWells = samplePlate.getSize();
        //Make a hashmap keyed to alphas, values are the associated betas.
        if(verbose){System.out.println("Making cell maps");}
        Map<String, String> distCellsMapAlphaKey = makeSequenceToSequenceMap(distinctCells,
                SequenceType.CDR3_ALPHA.ordinal(), SequenceType.CDR3_BETA.ordinal());
        if(verbose){System.out.println("Cell maps made");}
        //Make LinkedHashMaps keyed to sequences, values are SequenceRecords reflecting plate statistics
        if(verbose){System.out.println("Making sample plate sequence maps");}
        Map<String, SequenceRecord> alphaSequences = samplePlate.countSequences(readDepth, readErrorRate,
                errorCollisionRate, realSequenceCollisionRate, alphaIndices);
        int alphaCount = alphaSequences.size();
        if(verbose){System.out.println("Alpha sequences read: " + alphaCount);}
        Map<String, SequenceRecord> betaSequences = samplePlate.countSequences(readDepth, readErrorRate,
                errorCollisionRate, realSequenceCollisionRate, betaIndices);
        int betaCount = betaSequences.size();
        if(verbose){System.out.println("Beta sequences read: " + betaCount);}
        if(verbose){System.out.println("Sample plate sequence maps made");}
        //pre-filter saturating sequences and sequences likely to be misreads;
        //sequences present in all wells add a huge number of edges to the graph without carrying any signal
        if(verbose){System.out.println("Removing sequences present in all wells.");}
        filterByOccupancyThresholds(alphaSequences, 1, numWells - 1);
        filterByOccupancyThresholds(betaSequences, 1, numWells - 1);
        if(verbose){System.out.println("Sequences removed");}
        if(verbose){System.out.println("Remaining alpha sequence count: " + alphaSequences.size());}
        if(verbose){System.out.println("Remaining beta sequence count: " + betaSequences.size());}
        if (readDepth > 1) {
            if(verbose){System.out.println("Removing sequences with disparate occupancies and read counts");}
            filterByOccupancyAndReadCount(alphaSequences, readDepth);
            filterByOccupancyAndReadCount(betaSequences, readDepth);
            if(verbose){System.out.println("Sequences removed");}
            if(verbose){System.out.println("Remaining alpha sequence count: " + alphaSequences.size());}
            if(verbose){System.out.println("Remaining beta sequence count: " + betaSequences.size());}
        }
        int pairableAlphaCount = alphaSequences.size();
        if(verbose){System.out.println("Remaining alpha sequence count: " + pairableAlphaCount);}
        int pairableBetaCount = betaSequences.size();
        if(verbose){System.out.println("Remaining beta sequence count: " + pairableBetaCount);}
        if(verbose){System.out.println("Making vertex maps");}
        //For the SimpleWeightedBipartiteGraphMatrixGenerator, all vertices must have
        //distinct numbers associated with them. Since I'm using a 2D array, that means
@@ -92,50 +83,30 @@ public class Simulator implements GraphModificationFunctions {
        //subtract the vertexStartValue from betas to use their vertex labels as array indices
        int vertexStartValue = 0;
        //keys are alphas, values are sequential integer vertices
        Map<String, Integer> plateAtoVMap = makeSequenceToVertexMap(alphaSequences, vertexStartValue);
        //new start value for the beta map should be one more than the final vertex value in the alpha map
        vertexStartValue += plateAtoVMap.size();
        //keys are betas, values are sequential integer vertices
        Map<String, Integer> plateBtoVMap = makeSequenceToVertexMap(betaSequences, vertexStartValue);
        if(verbose){System.out.println("Vertex maps made");}
        //make adjacency matrix for bipartite graph generator
        //(technically this is only 1/4 of an adjacency matrix, but that's all you need
        //for a bipartite graph, and all the SimpleWeightedBipartiteGraphMatrixGenerator class expects.)
        if(verbose){System.out.println("Making adjacency matrix");}
        double[][] weights = new double[plateAtoVMap.size()][plateBtoVMap.size()];
        fillAdjacencyMatrix(weights, vertexStartValue, alphaSequences, betaSequences, plateAtoVMap, plateBtoVMap);
        if(verbose){System.out.println("Adjacency matrix made");}
        //make bipartite graph
        if(verbose){System.out.println("Making bipartite weighted graph");}
        //the graph object
        SimpleWeightedGraph<Vertex, DefaultWeightedEdge> graph =
                new SimpleWeightedGraph<>(DefaultWeightedEdge.class);
        //the graph generator
        SimpleWeightedBipartiteGraphMatrixGenerator graphGenerator = new SimpleWeightedBipartiteGraphMatrixGenerator();
        //the list of alpha vertices; iteration order is stable because LinkedHashMap preserves order of entry
        List<Vertex> alphaVertices = new ArrayList<>();
        for (String seq : plateAtoVMap.keySet()) {
            Vertex alphaVertex = new Vertex(alphaSequences.get(seq), plateAtoVMap.get(seq));
            alphaVertices.add(alphaVertex);
        }
        //Sort to make sure the order of vertices in list matches the order of the adjacency matrix
@@ -143,16 +114,9 @@ public class Simulator implements GraphModificationFunctions {
        //Add ordered list of vertices to the graph
        graphGenerator.first(alphaVertices);
        //the list of beta vertices
        List<Vertex> betaVertices = new ArrayList<>();
        for (String seq : plateBtoVMap.keySet()) {
            Vertex betaVertex = new Vertex(betaSequences.get(seq), plateBtoVMap.get(seq));
            betaVertices.add(betaVertex);
        }
        //Sort to make sure the order of vertices in list matches the order of the adjacency matrix
@@ -163,13 +127,12 @@ public class Simulator implements GraphModificationFunctions {
        graphGenerator.weights(weights);
        graphGenerator.generateGraph(graph);
        if(verbose){System.out.println("Graph created");}
        //stop timing
        Instant stop = Instant.now();
        Duration time = Duration.between(start, stop);
        //create GraphWithMapData object
        GraphWithMapData output = new GraphWithMapData(graph, numWells, samplePlate.getPopulations(), distCellsMapAlphaKey,
                alphaCount, betaCount, readDepth, readErrorRate, errorCollisionRate, realSequenceCollisionRate, time);
        //Set source file name in graph to name of sample plate
        output.setSourceFilename(samplePlate.getFilename());
        //return GraphWithMapData object
@@ -273,12 +236,9 @@ public class Simulator implements GraphModificationFunctions {
            e = weightIter.next();
            Vertex source = graph.getEdgeSource(e);
            Vertex target = graph.getEdgeTarget(e);
            //The match map is all matches found, not just true matches!
            matchMap.put(source.getSequence(), target.getSequence());
            check = target.getSequence().equals(distCellsMapAlphaKey.get(source.getSequence()));
            if(check) {
                trueCount++;
            }
@@ -287,11 +247,11 @@ public class Simulator implements GraphModificationFunctions {
            }
            List<String> result = new ArrayList<>();
            //alpha sequence
            result.add(source.getSequence());
            //alpha well count
            result.add(source.getOccupancy().toString());
            //beta sequence
            result.add(target.getSequence());
            //beta well count
            result.add(target.getOccupancy().toString());
            //overlap count
@@ -348,6 +308,7 @@ public class Simulator implements GraphModificationFunctions {
        metadata.put("sequence read depth", data.getReadDepth().toString());
        metadata.put("sequence read error rate", data.getReadErrorRate().toString());
        metadata.put("read error collision rate", data.getErrorCollisionRate().toString());
        metadata.put("real sequence collision rate", data.getRealSequenceCollisionRate().toString());
        metadata.put("total alphas read from plate", data.getAlphaCount().toString());
        metadata.put("total betas read from plate", data.getBetaCount().toString());
        //HARD CODED, PARAMETERIZE LATER
@@ -690,10 +651,10 @@ public class Simulator implements GraphModificationFunctions {
    // }
    //Remove sequences based on occupancy
    public static void filterByOccupancyThresholds(Map<String, SequenceRecord> wellMap, int low, int high){
        List<String> noise = new ArrayList<>();
        for(String k: wellMap.keySet()){
            if((wellMap.get(k).getOccupancy() > high) || (wellMap.get(k).getOccupancy() < low)){
                noise.add(k);
            }
        }
@@ -702,13 +663,12 @@ public class Simulator implements GraphModificationFunctions {
        }
    }

    public static void filterByOccupancyAndReadCount(Map<String, SequenceRecord> sequences, int readDepth) {
        List<String> noise = new ArrayList<>();
        for(String k : sequences.keySet()){
            //occupancy times read depth should be more than half the sequence read count if the read error rate is low
            Integer threshold = (sequences.get(k).getOccupancy() * readDepth) / 2;
            if(sequences.get(k).getReadCount() < threshold) {
                noise.add(k);
            }
        }
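The threshold comment can be made concrete with made-up numbers: at read depth 10, a sequence occupying 4 wells is expected to show roughly 40 reads, and is flagged as noise if it has fewer than (4 × 10) / 2 = 20. A self-contained sketch of the same rule, with records simplified to `{occupancy, readCount}` pairs so it runs on its own:

```java
import java.util.*;

//Sketch of the occupancy-vs-read-count filter: a real sequence read at depth d in w wells
//should have close to w*d reads; far fewer suggests a spurious sequence.
public class ReadCountFilter {
    //sequences maps name -> int[]{occupancy, readCount}; entries below threshold are removed
    static void filterByOccupancyAndReadCount(Map<String, int[]> sequences, int readDepth) {
        sequences.entrySet().removeIf(e -> {
            int occupancy = e.getValue()[0];
            int readCount = e.getValue()[1];
            //occupancy times read depth should be more than half the read count
            return readCount < (occupancy * readDepth) / 2;
        });
    }

    public static void main(String[] args) {
        Map<String, int[]> seqs = new LinkedHashMap<>();
        seqs.put("real", new int[]{4, 38});      //threshold 4*10/2 = 20, 38 >= 20 -> kept
        seqs.put("spurious", new int[]{4, 7});   //7 < 20 -> removed
        filterByOccupancyAndReadCount(seqs, 10);
        System.out.println(seqs.keySet());       //prints [real]
    }
}
```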
@@ -717,50 +677,6 @@ public class Simulator implements GraphModificationFunctions {
        }
    }
    //Counts the well occupancy of the row peptides and column peptides into given maps, and
    //fills weights in the given 2D array
    private static void countSequencesAndFillMatrix(Plate samplePlate,
                                                    Map<String, Integer> allRowSequences,
                                                    Map<String, Integer> allColumnSequences,
                                                    Map<String, Integer> rowSequenceToVertexMap,
                                                    Map<String, Integer> columnSequenceToVertexMap,
                                                    int[] rowSequenceIndices,
                                                    int[] colSequenceIndices,
                                                    Map<String, Integer> rowSequenceCounts,
                                                    Map<String, Integer> columnSequenceCounts,
                                                    double[][] weights){
        Map<String, Integer> wellNRowSequences;
        Map<String, Integer> wellNColumnSequences;
        int vertexStartValue = rowSequenceToVertexMap.size();
        int numWells = samplePlate.getSize();
        for (int n = 0; n < numWells; n++) {
            wellNRowSequences = new HashMap<>();
            samplePlate.assayWellsSequenceS(wellNRowSequences, n, rowSequenceIndices);
            for (String a : wellNRowSequences.keySet()) {
                if(allRowSequences.containsKey(a)){
                    rowSequenceCounts.merge(a, 1, (oldValue, newValue) -> oldValue + newValue);
                }
            }
            wellNColumnSequences = new HashMap<>();
            samplePlate.assayWellsSequenceS(wellNColumnSequences, n, colSequenceIndices);
            for (String b : wellNColumnSequences.keySet()) {
                if(allColumnSequences.containsKey(b)){
                    columnSequenceCounts.merge(b, 1, (oldValue, newValue) -> oldValue + newValue);
                }
            }
            for (String i : wellNRowSequences.keySet()) {
                if(allRowSequences.containsKey(i)){
                    for (String j : wellNColumnSequences.keySet()) {
                        if(allColumnSequences.containsKey(j)){
                            weights[rowSequenceToVertexMap.get(i)][columnSequenceToVertexMap.get(j) - vertexStartValue] += 1.0;
                        }
                    }
                }
            }
        }
    }
    private static Map<String, String> makeSequenceToSequenceMap(List<String[]> cells, int keySequenceIndex,
                                                                 int valueSequenceIndex){
        Map<String, String> keySequenceToValueSequenceMap = new HashMap<>();
@@ -770,9 +686,9 @@ public class Simulator implements GraphModificationFunctions {
return keySequenceToValueSequenceMap; return keySequenceToValueSequenceMap;
} }
private static Map<Integer, String> makeVertexToSequenceMap(Map<String, Integer> sequences, Integer startValue) { private static Map<Integer, String> makeVertexToSequenceMap(Map<String, SequenceRecord> sequences, Integer startValue) {
Map<Integer, String> map = new LinkedHashMap<>(); //LinkedHashMap to preserve order of entry Map<Integer, String> map = new LinkedHashMap<>(); //LinkedHashMap to preserve order of entry
Integer index = startValue; //is this necessary? I don't think I use this. Integer index = startValue;
for (String k: sequences.keySet()) { for (String k: sequences.keySet()) {
map.put(index, k); map.put(index, k);
index++; index++;
@@ -780,6 +696,30 @@ public class Simulator implements GraphModificationFunctions {
return map; return map;
} }
private static Map<String, Integer> makeSequenceToVertexMap(Map<String, SequenceRecord> sequences, Integer startValue) {
Map<String, Integer> map = new LinkedHashMap<>(); //LinkedHashMap to preserve order of entry
Integer index = startValue;
for (String k: sequences.keySet()) {
map.put(k, index);
index++;
}
return map;
}
private static void fillAdjacencyMatrix(double[][] weights, Integer vertexOffsetValue, Map<String, SequenceRecord> rowSequences,
Map<String, SequenceRecord> columnSequences, Map<String, Integer> rowToVertexMap,
Map<String, Integer> columnToVertexMap) {
for (String rowSeq: rowSequences.keySet()) {
for (Integer well: rowSequences.get(rowSeq).getWells()) {
for (String colSeq: columnSequences.keySet()) {
if (columnSequences.get(colSeq).isInWell(well)) {
weights[rowToVertexMap.get(rowSeq)][columnToVertexMap.get(colSeq) - vertexOffsetValue] += 1.0;
}
}
}
}
}
private static Map<String, Integer> invertVertexMap(Map<Integer, String> map) { private static Map<String, Integer> invertVertexMap(Map<Integer, String> map) {
Map<String, Integer> inverse = new HashMap<>(); Map<String, Integer> inverse = new HashMap<>();
for (Integer k : map.keySet()) { for (Integer k : map.keySet()) {
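The new `fillAdjacencyMatrix` no longer re-assays wells per plate position; it walks each row sequence's recorded well set and bumps an edge weight whenever a column sequence shares that well. A minimal standalone sketch of that co-occurrence count, with a hypothetical `SequenceRecord` stub that models only the two methods the diff uses (`getWells`, `isInWell`) — the real record also tracks occupancy and read counts:

```java
import java.util.*;

public class AdjacencySketch {
    // Hypothetical stand-in for the repo's SequenceRecord: only the
    // well-membership API used by fillAdjacencyMatrix is modeled.
    static class SequenceRecord {
        private final Set<Integer> wells;
        SequenceRecord(Integer... wells) { this.wells = new HashSet<>(Arrays.asList(wells)); }
        Set<Integer> getWells() { return wells; }
        boolean isInWell(Integer well) { return wells.contains(well); }
    }

    // Same shape as the diff's fillAdjacencyMatrix: weights[i][j] counts
    // how many wells row sequence i and column sequence j co-occupy.
    // Column vertex labels start at vertexOffset, hence the subtraction.
    static void fillAdjacencyMatrix(double[][] weights, int vertexOffset,
                                    Map<String, SequenceRecord> rowSequences,
                                    Map<String, SequenceRecord> columnSequences,
                                    Map<String, Integer> rowToVertexMap,
                                    Map<String, Integer> columnToVertexMap) {
        for (String rowSeq : rowSequences.keySet()) {
            for (Integer well : rowSequences.get(rowSeq).getWells()) {
                for (String colSeq : columnSequences.keySet()) {
                    if (columnSequences.get(colSeq).isInWell(well)) {
                        weights[rowToVertexMap.get(rowSeq)]
                               [columnToVertexMap.get(colSeq) - vertexOffset] += 1.0;
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        Map<String, SequenceRecord> rows = new LinkedHashMap<>();
        rows.put("AAA", new SequenceRecord(0, 1));      // row peptide in wells 0, 1
        Map<String, SequenceRecord> cols = new LinkedHashMap<>();
        cols.put("CCC", new SequenceRecord(1, 2));      // column peptide in wells 1, 2
        double[][] w = new double[1][1];
        // column vertex labels start after the 1 row vertex, so offset = 1
        fillAdjacencyMatrix(w, 1, rows, cols, Map.of("AAA", 0), Map.of("CCC", 1));
        System.out.println(w[0][0]); // one shared well (well 1) -> 1.0
    }
}
```

Counting from the records' cached well sets is what lets this routine stay compatible with plate caching: the plate never has to be re-simulated to rebuild the matrix.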
@@ -1,76 +1,45 @@
 import org.jheaps.AddressableHeap;
 import java.io.Serializable;
+import java.util.Map;
 public class Vertex implements Serializable, Comparable<Vertex> {
-    private SequenceType type;
+    private SequenceRecord record;
     private Integer vertexLabel;
-    private String sequence;
-    private Integer occupancy;
-    private Integer readCount;
     private Double potential;
     private AddressableHeap queue;
-    public Vertex(Integer vertexLabel) {
+    public Vertex(SequenceRecord record, Integer vertexLabel) {
+        this.record = record;
         this.vertexLabel = vertexLabel;
     }
-    public Vertex(String vertexLabel) {
-        this.vertexLabel = Integer.parseInt((vertexLabel));
-    }
-    public Vertex(SequenceType type, String sequence, Integer occupancy, Integer vertexLabel) {
-        this.type = type;
-        this.vertexLabel = vertexLabel;
-        this.sequence = sequence;
-        this.occupancy = occupancy;
-        this.readCount = 1;
-    }
-    public Vertex(SequenceType type, String sequence, Integer occupancy, Integer vertexLabel, Integer readCount) {
-        this.type = type;
-        this.vertexLabel = vertexLabel;
-        this.sequence = sequence;
-        this.occupancy = occupancy;
-        this.readCount = readCount;
-    }
-    public SequenceType getType() {
-        return type;
-    }
-    public void setType(String type) {
-        this.type = SequenceType.valueOf(type);
-    }
+    public SequenceRecord getRecord() { return record; }
+    public SequenceType getType() { return record.getSequenceType(); }
     public Integer getVertexLabel() {
         return vertexLabel;
     }
-    public void setVertexLabel(String label) {
-        this.vertexLabel = Integer.parseInt(label);
-    }
     public String getSequence() {
-        return sequence;
-    }
-    public void setSequence(String sequence) {
-        this.sequence = sequence;
+        return record.getSequence();
     }
     public Integer getOccupancy() {
-        return occupancy;
+        return record.getOccupancy();
     }
-    public void setOccupancy(String occupancy) {
-        this.occupancy = Integer.parseInt(occupancy);
-    }
+    public Integer getReadCount() { return record.getReadCount(); }
+    public Integer getReadCount(Integer well) { return record.getReadCount(well); }
+    public Map<Integer, Integer> getWellOccupancies() { return record.getWellOccupancies(); }
     @Override //adapted from JGraphT example code
     public int hashCode()
     {
-        return (sequence == null) ? 0 : sequence.hashCode();
+        return (this.getSequence() == null) ? 0 : this.getSequence().hashCode();
     }
     @Override //adapted from JGraphT example code
@@ -83,22 +52,21 @@ public class Vertex implements Serializable, Comparable<Vertex> {
         if (getClass() != obj.getClass())
             return false;
         Vertex other = (Vertex) obj;
-        if (sequence == null) {
-            return other.sequence == null;
+        if (this.getSequence() == null) {
+            return other.getSequence() == null;
         } else {
-            return sequence.equals(other.sequence);
+            return this.getSequence().equals(other.getSequence());
         }
     }
     @Override //adapted from JGraphT example code
     public String toString()
     {
         StringBuilder sb = new StringBuilder();
         sb.append("(").append(vertexLabel)
-                .append(", Type: ").append(type.name())
-                .append(", Sequence: ").append(sequence)
-                .append(", Occupancy: ").append(occupancy).append(")");
+                .append(", Type: ").append(this.getType().name())
+                .append(", Sequence: ").append(this.getSequence())
+                .append(", Occupancy: ").append(this.getOccupancy()).append(")");
         return sb.toString();
     }
@@ -106,12 +74,4 @@ public class Vertex implements Serializable, Comparable<Vertex> {
     public int compareTo(Vertex other) {
         return this.vertexLabel - other.getVertexLabel();
     }
-    public Integer getReadCount() {
-        return readCount;
-    }
-    public void setReadCount(Integer readCount) {
-        this.readCount = readCount;
-    }
 }
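The refactor keeps `Vertex`'s JGraphT-style identity contract intact: `equals` and `hashCode` are still keyed on the sequence string alone (never on the label), only now the string is fetched through the record. A hedged sketch of that delegation pattern, using a hypothetical one-field `SequenceRecord` stub (the real record also carries type, occupancy, and per-well read counts):

```java
import java.util.Objects;

public class VertexIdentitySketch {
    // Hypothetical stub: holds just the sequence used for vertex identity.
    static class SequenceRecord {
        private final String sequence;
        SequenceRecord(String sequence) { this.sequence = sequence; }
        String getSequence() { return sequence; }
    }

    static class Vertex {
        private final SequenceRecord record;
        private final Integer vertexLabel;
        Vertex(SequenceRecord record, Integer vertexLabel) {
            this.record = record;
            this.vertexLabel = vertexLabel;
        }
        String getSequence() { return record.getSequence(); }

        // Identity delegates to the record's sequence, as in the diff:
        // two vertices with the same sequence are the same graph node.
        @Override
        public int hashCode() {
            return (getSequence() == null) ? 0 : getSequence().hashCode();
        }
        @Override
        public boolean equals(Object obj) {
            if (this == obj) return true;
            if (obj == null || getClass() != obj.getClass()) return false;
            return Objects.equals(getSequence(), ((Vertex) obj).getSequence());
        }
    }

    public static void main(String[] args) {
        Vertex a = new Vertex(new SequenceRecord("ACGT"), 0);
        Vertex b = new Vertex(new SequenceRecord("ACGT"), 7); // different label
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // true
    }
}
```

Because identity lives in the immutable record rather than in settable fields, the old `setSequence`/`setReadCount` mutators could be dropped without risking a vertex changing its hash while sitting in a graph's hash-based vertex set.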