Compare commits: d1a56c3578...v4.1 (54 commits)
--- a/readme.md
+++ b/readme.md
@@ -96,7 +96,7 @@ These files are often generated in sequence. When entering filenames, it is not
 (.csv or .ser). When reading or writing files, the program will automatically add the correct extension to any filename
 without one.
 
-To save file I/O time, the most recent instance of each of these four
+To save file I/O time when using the interactive interface, the most recent instance of each of these four
 files either generated or read from disk can be cached in program memory. When caching is active, subsequent uses of the
 same data file won't need to be read in again until another file of that type is used or generated,
 or caching is turned off for that file type. The program checks whether it needs to update its cached data by comparing
@@ -160,7 +160,7 @@ Options when making a Sample Plate file:
 * Number of sections on plate
 * Number of T cells per well
   * per section, if more than one section
-* Dropout rate
+* Sequence dropout rate
 
 Files are in CSV format. There are no header labels. Every row represents a well.
 Every value represents an individual cell, containing four sequences, depicted as an array string:
@@ -200,6 +200,11 @@ then use it for multiple different BiGpairSEQ simulations.
 Options for creating a Graph/Data file:
 * The Cell Sample file to use
 * The Sample Plate file to use. (This must have been generated from the selected Cell Sample file.)
+* Whether to simulate sequence read depth. If simulated:
+  * The read depth (number of times each sequence is read)
+  * The read error rate (probability a sequence is misread)
+  * The error collision rate (probability two misreads produce the same spurious sequence)
+  * The real sequence collision rate (probability that a misread will produce a different, real sequence from the sample plate. Only applies to new misreads; once an error of this type has occurred, its likelihood of occurring again is dominated by the error collision probability.)
 
 These files do not have a human-readable structure, and are not portable to other programs.
 
@@ -207,8 +212,8 @@ These files do not have a human-readable structure, and are not portable to othe
 
 For portability of graph data to other software, turn on [GraphML](http://graphml.graphdrawing.org/index.html) output
 in the Options menu in interactive mode, or use the `-graphml` command line argument. This will produce a .graphml file
-for the weighted graph, with vertex attributes for sequence, type, and occupancy data. This graph contains all the data
-necessary for the BiGpairSEQ matching algorithm. It does not include the data to measure pairing accuracy; for that,
+for the weighted graph, with vertex attributes for sequence, type, total occupancy, total read count, and the read count for every individual occupied well.
+This graph contains all the data necessary for the BiGpairSEQ matching algorithm. It does not include the data to measure pairing accuracy; for that,
 compare the matching results to the original Cell Sample .csv file.
 
 ---
@@ -265,7 +270,7 @@ P-values are calculated *after* BiGpairSEQ matching is completed, for purposes o
 using the (2021 corrected) formula from the original pairSEQ paper. (Howie, et al. 2015)
 
 
-## PERFORMANCE
+## PERFORMANCE (old results; need updating to reflect current, improved simulator performance)
 
 On a home computer with a Ryzen 5600X CPU, 64GB of 3200MHz DDR4 RAM (half of which was allocated to the Java Virtual Machine), and a PCIe 3.0 SSD, running Linux Mint 20.3 Edge (5.13 kernel),
 the author ran a BiGpairSEQ simulation of a 96-well sample plate with 30,000 T cells/well comprising ~11,800 alphas and betas,
@@ -340,23 +345,35 @@ roughly as though it had a constant well population equal to the plate's average
 * ~~Add controllable heap-type parameter?~~
   * Parameter implemented. Fibonacci heap the current default.
 * ~~Implement sample plates with random numbers of T cells per well.~~ DONE
-* Possible BiGpairSEQ advantage over pairSEQ: BiGpairSEQ is resilient to variations in well population sizes on a sample plate; pairSEQ is not.
+* Possible BiGpairSEQ advantage over pairSEQ: BiGpairSEQ is resilient to variations in well population sizes on a sample plate; pairSEQ is not due to nature of probability calculations.
   * preliminary data suggests that BiGpairSEQ behaves roughly as though the whole plate had whatever the *average* well concentration is, but that's still speculative.
-* See if there's a reasonable way to reformat Sample Plate files so that wells are columns instead of rows.
+* ~~See if there's a reasonable way to reformat Sample Plate files so that wells are columns instead of rows.~~
   * ~~Problem is variable number of cells in a well~~
   * ~~Apache Commons CSV library writes entries a row at a time~~
-  * Got this working, but at the cost of a profoundly strange bug in graph occupancy filtering. Have reverted the repo until I can figure out what caused that. Given how easily Thingiverse transposes CSV matrices in R, might not even be worth fixing.
+  * Got this working, but at the cost of a profoundly strange bug in graph occupancy filtering. Have reverted the repo until I can figure out what caused that. Given how easily Thingiverse transposes CSV matrices in R, might not even be worth fixing.
 * ~~Enable GraphML output in addition to serialized object binaries, for data portability~~ DONE
-  * ~~Custom vertex type with attribute for sequence occupancy?~~ DONE
-    * Advantage: would eliminate the need to use maps to associate vertices with sequences, which would make the code easier to understand.
   * ~~Have a branch where this is implemented, but there's a bug that broke matching. Don't currently have time to fix.~~
 * ~~Re-implement command line arguments, to enable scripting and statistical simulation studies~~ DONE
 * ~~Implement custom Vertex class to simplify code and make it easier to implement different MWM algorithms~~ DONE
+  * Advantage: would eliminate the need to use maps to associate vertices with sequences, which would make the code easier to understand.
   * This also seems to be faster when using the same algorithm than the version with lots of maps, which is a nice bonus!
+* ~~Implement simulation of read depth, and of read errors. Pre-filter graph for difference in read count to eliminate spurious sequences.~~ DONE
+  * Pre-filtering based on comparing (read depth) * (occupancy) to (read count) for each sequence works extremely well
+  * ~~Add read depth simulation options to CLI~~ DONE
+  * ~~Update graphml output to reflect current Vertex class attributes~~ DONE
+    * Individual well data from the SequenceRecords could be included, if there's ever a reason for it
+  * ~~Implement simulation of sequences being misread as other real sequence~~ DONE
+  * Update matching metadata output options in CLI
+  * Update performance data in this readme
+  * Add section to ReadMe describing data filtering methods.
 * Re-implement CDR1 matching method
+* Refactor simulator code to collect all needed data in a single scan of the plate
+  * Currently it scans once for the vertices and then again for the edge weights. This made simulating read depth awkward, and incompatible with caching of plate files.
+  * This would be a fairly major rewrite of the simulator code, but could make things faster, and would definitely make them cleaner.
 * Implement Duan and Su's maximum weight matching algorithm
   * Add controllable algorithm-type parameter?
   * This would be fun and valuable, but probably take more time than I have for a hobby project.
+* Implement an auction algorithm for maximum weight matching
 * Implement an algorithm for approximating a maximum weight matching
   * Some of these run in linear or near-linear time
   * given that the underlying biological samples have many, many sources of error, this would probably be the most useful option in practice. It seems less mathematically elegant, though, and so less fun for me.
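The TODO list above notes that pre-filtering "based on comparing (read depth) * (occupancy) to (read count) for each sequence works extremely well." A minimal sketch of that idea, assuming illustrative names (this is not the project's actual Simulator code): a real sequence read at depth d in w wells should accumulate roughly d * w total reads, while a spurious sequence produced by misreads accumulates far fewer.

```java
// Sketch of the read-count pre-filter idea from the TODO list above.
// All names and the tolerance parameter are illustrative assumptions.
public class ReadCountPrefilterSketch {

    // Expected total read count for a sequence present in `occupancy` wells.
    static int expectedReads(int readDepth, int occupancy) {
        return readDepth * occupancy;
    }

    // Keep a sequence only if its observed read count is close to expectation.
    // `tolerance` is the fraction of the expected count the observation may miss by.
    static boolean passesPrefilter(int readDepth, int occupancy,
                                   int observedReads, double tolerance) {
        int expected = expectedReads(readDepth, occupancy);
        return Math.abs(expected - observedReads) <= tolerance * expected;
    }

    public static void main(String[] args) {
        // A real sequence: depth 100, seen in 12 wells, 1158 reads observed.
        System.out.println(passesPrefilter(100, 12, 1158, 0.10)); // true
        // A spurious sequence: same nominal occupancy, but only 40 reads observed.
        System.out.println(passesPrefilter(100, 12, 40, 0.10));   // false
    }
}
```

The tolerance would in practice depend on the configured read error rate; a higher error rate diverts more reads away from the true sequence, widening the band of plausible observed counts.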
--- a/CellFileReader.java
+++ b/CellFileReader.java
@@ -12,7 +12,7 @@ import java.util.List;
 public class CellFileReader {
 
     private String filename;
-    private List<Integer[]> distinctCells = new ArrayList<>();
+    private List<String[]> distinctCells = new ArrayList<>();
     private Integer cdr1Freq;
 
     public CellFileReader(String filename) {
@@ -32,11 +32,11 @@ public class CellFileReader {
             CSVParser parser = new CSVParser(reader, cellFileFormat);
         ){
             for(CSVRecord record: parser.getRecords()) {
-                Integer[] cell = new Integer[4];
-                cell[0] = Integer.valueOf(record.get("Alpha CDR3"));
-                cell[1] = Integer.valueOf(record.get("Beta CDR3"));
-                cell[2] = Integer.valueOf(record.get("Alpha CDR1"));
-                cell[3] = Integer.valueOf(record.get("Beta CDR1"));
+                String[] cell = new String[4];
+                cell[0] = record.get("Alpha CDR3");
+                cell[1] = record.get("Beta CDR3");
+                cell[2] = record.get("Alpha CDR1");
+                cell[3] = record.get("Beta CDR1");
                 distinctCells.add(cell);
             }
 
@@ -47,8 +47,8 @@ public class CellFileReader {
         }
 
         //get CDR1 frequency
-        ArrayList<Integer> cdr1Alphas = new ArrayList<>();
-        for (Integer[] cell : distinctCells) {
+        ArrayList<String> cdr1Alphas = new ArrayList<>();
+        for (String[] cell : distinctCells) {
             cdr1Alphas.add(cell[3]);
         }
         double count = cdr1Alphas.stream().distinct().count();
@@ -62,14 +62,4 @@ public class CellFileReader {
     }
 
     public String getFilename() { return filename;}
-
-    //Refactor everything that uses this to have access to a Cell Sample and get the cells there instead.
-    public List<Integer[]> getListOfDistinctCellsDEPRECATED(){
-        return distinctCells;
-    }
-
-    public Integer getCellCountDEPRECATED() {
-        //Refactor everything that uses this to have access to a Cell Sample and get the count there instead.
-        return distinctCells.size();
-    }
 }
--- a/CellFileWriter.java
+++ b/CellFileWriter.java
@@ -11,7 +11,7 @@ import java.util.List;
 public class CellFileWriter {
 
     private String[] headers = {"Alpha CDR3", "Beta CDR3", "Alpha CDR1", "Beta CDR1"};
-    List<Integer[]> cells;
+    List<String[]> cells;
     String filename;
     Integer cdr1Freq;
 
@@ -35,7 +35,7 @@ public class CellFileWriter {
             printer.printComment("Sample contains 1 unique CDR1 for every " + cdr1Freq + "unique CDR3s.");
             printer.printRecords(cells);
         } catch(IOException ex){
-            System.out.println("Could not make new file named "+filename);
+            System.out.println("Could not make new file named " + filename);
             System.err.println(ex);
         }
     }
--- a/CellSample.java
+++ b/CellSample.java
@@ -5,7 +5,7 @@ import java.util.stream.IntStream;
 
 public class CellSample {
 
-    private List<Integer[]> cells;
+    private List<String[]> cells;
     private Integer cdr1Freq;
 
     public CellSample(Integer numDistinctCells, Integer cdr1Freq){
@@ -24,28 +24,28 @@ public class CellSample {
 
         //Each cell represented by 4 values
         //two CDR3s, and two CDR1s. First two values are CDR3s (alpha, beta), second two are CDR1s (alpha, beta)
-        List<Integer[]> distinctCells = new ArrayList<>();
+        List<String[]> distinctCells = new ArrayList<>();
         for(int i = 0; i < numbersCDR3.size() - 1; i = i + 2){
             //Go through entire CDR3 list once, make pairs of alphas and betas
-            Integer tmpCDR3a = numbersCDR3.get(i);
-            Integer tmpCDR3b = numbersCDR3.get(i+1);
-            //Go through (likely shorter) CDR1 list as many times as necessary, make pairs of alphas and betas
-            Integer tmpCDR1a = numbersCDR1.get(i % numbersCDR1.size());
-            Integer tmpCDR1b = numbersCDR1.get((i+1) % numbersCDR1.size());
+            String tmpCDR3a = numbersCDR3.get(i).toString();
+            String tmpCDR3b = numbersCDR3.get(i+1).toString();
+            //Go through the (likely shorter) CDR1 list as many times as necessary, make pairs of alphas and betas
+            String tmpCDR1a = numbersCDR1.get(i % numbersCDR1.size()).toString();
+            String tmpCDR1b = numbersCDR1.get((i+1) % numbersCDR1.size()).toString();
             //Make the array representing the cell
-            Integer[] tmp = {tmpCDR3a, tmpCDR3b, tmpCDR1a, tmpCDR1b};
+            String[] tmp = {tmpCDR3a, tmpCDR3b, tmpCDR1a, tmpCDR1b};
             //Add the cell to the list of distinct cells
             distinctCells.add(tmp);
         }
         this.cells = distinctCells;
     }
 
-    public CellSample(List<Integer[]> cells, Integer cdr1Freq){
+    public CellSample(List<String[]> cells, Integer cdr1Freq){
        this.cells = cells;
        this.cdr1Freq = cdr1Freq;
    }
 
-    public List<Integer[]> getCells(){
+    public List<String[]> getCells(){
         return cells;
     }
 
--- a/CommandLineInterface.java
+++ b/CommandLineInterface.java
@@ -35,6 +35,10 @@ import java.util.stream.Stream;
  * output : name of the output file
  * graphml : output a graphml file
  * binary : output a serialized binary object file
+ * IF SIMULATING READ DEPTH, ALL THESE ARE REQUIRED. Absence indicates not simulating read depth
+ * readdepth: number of reads per sequence
+ * readerrorprob: probability of reading a sequence incorrectly
+ * errcollisionprob: probability of two read errors being identical
  *
  * Match flags:
  * graphFile : name of graph and data file to use as input
@@ -142,7 +146,25 @@ public class CommandLineInterface {
         CellSample cells = getCells(cellFilename);
         //get plate
         Plate plate = getPlate(plateFilename);
-        GraphWithMapData graph = Simulator.makeGraph(cells, plate, false);
+        GraphWithMapData graph;
+        Integer readDepth = 1;
+        Double readErrorRate = 0.0;
+        Double errorCollisionRate = 0.0;
+        Double realSequenceCollisionRate = 0.0;
+        if (line.hasOption("rd")) {
+            readDepth = Integer.parseInt(line.getOptionValue("rd"));
+        }
+        if (line.hasOption("err")) {
+            readErrorRate = Double.parseDouble(line.getOptionValue("err"));
+        }
+        if (line.hasOption("errcoll")) {
+            errorCollisionRate = Double.parseDouble(line.getOptionValue("errcoll"));
+        }
+        if (line.hasOption("realcoll")) {
+            realSequenceCollisionRate = Double.parseDouble(line.getOptionValue("realcoll"));
+        }
+        graph = Simulator.makeCDR3Graph(cells, plate, readDepth, readErrorRate, errorCollisionRate,
+                realSequenceCollisionRate, false);
         if (!line.hasOption("no-binary")) { //output binary file unless told not to
             GraphDataObjectWriter writer = new GraphDataObjectWriter(outputFilename, graph, false);
             writer.writeDataToFile();
@@ -384,11 +406,41 @@ public class CommandLineInterface {
                 .longOpt("no-binary")
                 .desc("(Optional) Don't output serialized binary file")
                 .build();
+        Option readDepth = Option.builder("rd")
+                .longOpt("read-depth")
+                .desc("(Optional) The number of times to read each sequence.")
+                .hasArg()
+                .argName("depth")
+                .build();
+        Option readErrorProb = Option.builder("err")
+                .longOpt("read-error-prob")
+                .desc("(Optional) The probability that a sequence will be misread. (0.0 - 1.0)")
+                .hasArg()
+                .argName("prob")
+                .build();
+        Option errorCollisionProb = Option.builder("errcoll")
+                .longOpt("error-collision-prob")
+                .desc("(Optional) The probability that two misreads will produce the same spurious sequence. (0.0 - 1.0)")
+                .hasArg()
+                .argName("prob")
+                .build();
+        Option realSequenceCollisionProb = Option.builder("realcoll")
+                .longOpt("real-collision-prob")
+                .desc("(Optional) The probability that a sequence will be misread " +
+                        "as another real sequence. (Only applies to unique misreads; after this has happened once, " +
+                        "future error collisions could produce the real sequence again) (0.0 - 1.0)")
+                .hasArg()
+                .argName("prob")
+                .build();
         graphOptions.addOption(cellFilename);
         graphOptions.addOption(plateFilename);
         graphOptions.addOption(outputFileOption());
         graphOptions.addOption(outputGraphML);
         graphOptions.addOption(outputSerializedBinary);
+        graphOptions.addOption(readDepth);
+        graphOptions.addOption(readErrorProb);
+        graphOptions.addOption(errorCollisionProb);
+        graphOptions.addOption(realSequenceCollisionProb);
         return graphOptions;
     }
 
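The new `rd`, `err`, and `errcoll` flags configure a read-error model: each of the `readDepth` reads of a sequence can be misread, and a misread can either collide with an existing spurious sequence or create a new one. The following is a toy Monte Carlo sketch of that model under my reading of the flag descriptions, not the project's actual Simulator code; all names here are illustrative.

```java
import java.util.Random;

// Toy sketch of the read-error model the CLI flags above configure
// (an assumption based on the flag descriptions, not the real Simulator).
public class ReadErrorModelSketch {

    // Returns {correctReads, collidedMisreads, newSpuriousReads} for one sequence.
    static int[] simulateReads(int readDepth, double readErrorRate,
                               double errorCollisionRate, Random rng) {
        int correct = 0, collided = 0, fresh = 0;
        for (int i = 0; i < readDepth; i++) {
            if (rng.nextDouble() >= readErrorRate) {
                correct++;              // read succeeded
            } else if (rng.nextDouble() < errorCollisionRate) {
                collided++;             // misread matched an existing spurious sequence
            } else {
                fresh++;                // misread produced a brand-new spurious sequence
            }
        }
        return new int[]{correct, collided, fresh};
    }

    public static void main(String[] args) {
        // 10,000 reads, 5% misread probability, 50% of misreads collide.
        int[] counts = simulateReads(10_000, 0.05, 0.5, new Random(42));
        System.out.println(counts[0] + " correct, " + counts[1]
                + " collided, " + counts[2] + " new spurious");
    }
}
```

With these numbers, roughly 9,500 reads are correct and the remaining ~500 misreads split about evenly between collisions and fresh spurious sequences, which is why the read-count pre-filter described in the readme can separate real from spurious vertices so cleanly.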
--- a/GraphMLFileWriter.java
+++ b/GraphMLFileWriter.java
@@ -5,7 +5,6 @@ import org.jgrapht.nio.AttributeType;
 import org.jgrapht.nio.DefaultAttribute;
 import org.jgrapht.nio.graphml.GraphMLExporter;
 import org.jgrapht.nio.graphml.GraphMLExporter.AttributeCategory;
-import org.w3c.dom.Attr;
 
 import java.io.BufferedWriter;
 import java.io.IOException;
@@ -13,6 +12,7 @@ import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.StandardOpenOption;
 import java.util.HashMap;
+import java.util.Iterator;
 import java.util.Map;
 
 public class GraphMLFileWriter {
@@ -41,11 +41,11 @@ public class GraphMLFileWriter {
     }
 
     private Map<String, Attribute> createGraphAttributes(){
-        Map<String, Attribute> ga = new HashMap<>();
+        Map<String, Attribute> attributes = new HashMap<>();
         //Sample plate filename
-        ga.put("sample plate filename", DefaultAttribute.createAttribute(data.getSourceFilename()));
+        attributes.put("sample plate filename", DefaultAttribute.createAttribute(data.getSourceFilename()));
         // Number of wells
-        ga.put("well count", DefaultAttribute.createAttribute(data.getNumWells().toString()));
+        attributes.put("well count", DefaultAttribute.createAttribute(data.getNumWells().toString()));
         //Well populations
         Integer[] wellPopulations = data.getWellPopulations();
         StringBuilder populationsStringBuilder = new StringBuilder();
@@ -55,8 +55,37 @@ public class GraphMLFileWriter {
             populationsStringBuilder.append(wellPopulations[i].toString());
         }
         String wellPopulationsString = populationsStringBuilder.toString();
-        ga.put("well populations", DefaultAttribute.createAttribute(wellPopulationsString));
-        return ga;
+        attributes.put("well populations", DefaultAttribute.createAttribute(wellPopulationsString));
+        attributes.put("read depth", DefaultAttribute.createAttribute(data.getReadDepth().toString()));
+        attributes.put("read error rate", DefaultAttribute.createAttribute(data.getReadErrorRate().toString()));
+        attributes.put("error collision rate", DefaultAttribute.createAttribute(data.getErrorCollisionRate().toString()));
+        attributes.put("real sequence collision rate", DefaultAttribute.createAttribute(data.getRealSequenceCollisionRate()));
+        return attributes;
+    }
+
+    private Map<String, Attribute> createVertexAttributes(Vertex v){
+        Map<String, Attribute> attributes = new HashMap<>();
+        //sequence type
+        attributes.put("type", DefaultAttribute.createAttribute(v.getType().name()));
+        //sequence
+        attributes.put("sequence", DefaultAttribute.createAttribute(v.getSequence()));
+        //number of wells the sequence appears in
+        attributes.put("occupancy", DefaultAttribute.createAttribute(v.getOccupancy()));
+        //total number of times the sequence was read
+        attributes.put("total read count", DefaultAttribute.createAttribute(v.getReadCount()));
+        StringBuilder wellsAndReadCountsBuilder = new StringBuilder();
+        Iterator<Map.Entry<Integer, Integer>> wellOccupancies = v.getWellOccupancies().entrySet().iterator();
+        while (wellOccupancies.hasNext()) {
+            Map.Entry<Integer, Integer> entry = wellOccupancies.next();
+            wellsAndReadCountsBuilder.append(entry.getKey() + ":" + entry.getValue());
+            if (wellOccupancies.hasNext()) {
+                wellsAndReadCountsBuilder.append(", ");
+            }
+        }
+        String wellsAndReadCounts = wellsAndReadCountsBuilder.toString();
+        //the wells the sequence appears in and the read counts in those wells
+        attributes.put("wells:read counts", DefaultAttribute.createAttribute(wellsAndReadCounts));
+        return attributes;
     }
 
     public void writeGraphToFile() {
@@ -69,13 +98,7 @@ public class GraphMLFileWriter {
         //Set graph attributes
         exporter.setGraphAttributeProvider( () -> graphAttributes);
         //set type, sequence, and occupancy attributes for each vertex
-        exporter.setVertexAttributeProvider( v -> {
-            Map<String, Attribute> attributes = new HashMap<>();
-            attributes.put("type", DefaultAttribute.createAttribute(v.getType().name()));
-            attributes.put("sequence", DefaultAttribute.createAttribute(v.getSequence()));
-            attributes.put("occupancy", DefaultAttribute.createAttribute(v.getOccupancy()));
-            return attributes;
-        });
+        exporter.setVertexAttributeProvider(this::createVertexAttributes);
         //register the attributes
         for(String s : graphAttributes.keySet()) {
             exporter.registerAttribute(s, AttributeCategory.GRAPH, AttributeType.STRING);
@@ -83,6 +106,8 @@ public class GraphMLFileWriter {
         exporter.registerAttribute("type", AttributeCategory.NODE, AttributeType.STRING);
         exporter.registerAttribute("sequence", AttributeCategory.NODE, AttributeType.STRING);
         exporter.registerAttribute("occupancy", AttributeCategory.NODE, AttributeType.STRING);
+        exporter.registerAttribute("total read count", AttributeCategory.NODE, AttributeType.STRING);
+        exporter.registerAttribute("wells:read counts", AttributeCategory.NODE, AttributeType.STRING);
         //export the graph
         exporter.exportGraph(graph, writer);
     } catch(IOException ex){
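The new "wells:read counts" vertex attribute serializes a well-to-read-count map as `well:count` pairs joined by `", "` (e.g. `3:97, 15:102`). A self-contained sketch of the same encoding, plus the decoder a downstream GraphML consumer would need (the class and method names here are illustrative, not part of the project):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

// Sketch of the "wells:read counts" attribute format used in the GraphML output,
// with a round-trip decoder for consumers. Names are illustrative.
public class WellReadCountsCodec {

    // Encode {well -> read count} as "well:count" pairs joined by ", ".
    static String encode(Map<Integer, Integer> wellReadCounts) {
        StringJoiner joiner = new StringJoiner(", ");
        for (Map.Entry<Integer, Integer> e : wellReadCounts.entrySet()) {
            joiner.add(e.getKey() + ":" + e.getValue());
        }
        return joiner.toString();
    }

    // Parse the attribute string back into a map (insertion order preserved).
    static Map<Integer, Integer> decode(String attribute) {
        Map<Integer, Integer> result = new LinkedHashMap<>();
        if (attribute.isEmpty()) return result;
        for (String pair : attribute.split(", ")) {
            String[] parts = pair.split(":");
            result.put(Integer.valueOf(parts[0]), Integer.valueOf(parts[1]));
        }
        return result;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> counts = new LinkedHashMap<>();
        counts.put(3, 97);   // well 3, 97 reads
        counts.put(15, 102); // well 15, 102 reads
        String encoded = encode(counts);
        System.out.println(encoded);                        // 3:97, 15:102
        System.out.println(decode(encoded).equals(counts)); // true
    }
}
```

Since GraphML attributes are registered as `AttributeType.STRING`, packing the per-well map into one delimited string like this keeps the export schema flat while still letting other tools recover the structure.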
@@ -12,7 +12,6 @@ public interface GraphModificationFunctions {
     static Map<Vertex[], Integer> filterByOverlapThresholds(SimpleWeightedGraph<Vertex, DefaultWeightedEdge> graph,
                                                             int low, int high, boolean saveEdges) {
         Map<Vertex[], Integer> removedEdges = new HashMap<>();
-        //List<Integer[]> removedEdges = new ArrayList<>();
         for (DefaultWeightedEdge e : graph.edgeSet()) {
             if ((graph.getEdgeWeight(e) > high) || (graph.getEdgeWeight(e) < low)) {
                 if(saveEdges) {
@@ -94,6 +93,38 @@ public interface GraphModificationFunctions {
         return removedEdges;
     }

+    static Map<Vertex[], Integer> filterByRelativeReadCount (SimpleWeightedGraph<Vertex, DefaultWeightedEdge> graph, Integer threshold, boolean saveEdges) {
+        Map<Vertex[], Integer> removedEdges = new HashMap<>();
+        Boolean passes;
+        for (DefaultWeightedEdge e : graph.edgeSet()) {
+            Integer alphaReadCount = graph.getEdgeSource(e).getReadCount();
+            Integer betaReadCount = graph.getEdgeTarget(e).getReadCount();
+            passes = RelativeReadCountFilterFunction(threshold, alphaReadCount, betaReadCount);
+            if (!passes) {
+                if (saveEdges) {
+                    Vertex source = graph.getEdgeSource(e);
+                    Vertex target = graph.getEdgeTarget(e);
+                    Integer intWeight = (int) graph.getEdgeWeight(e);
+                    Vertex[] edge = {source, target};
+                    removedEdges.put(edge, intWeight);
+                }
+                else {
+                    graph.setEdgeWeight(e, 0.0);
+                }
+            }
+        }
+        if(saveEdges) {
+            for (Vertex[] edge : removedEdges.keySet()) {
+                graph.removeEdge(edge[0], edge[1]);
+            }
+        }
+        return removedEdges;
+    }
+
+    static Boolean RelativeReadCountFilterFunction(Integer threshold, Integer alphaReadCount, Integer betaReadCount) {
+        return Math.abs(alphaReadCount - betaReadCount) < threshold;
+    }
+
     static void addRemovedEdges(SimpleWeightedGraph<Vertex, DefaultWeightedEdge> graph,
                                 Map<Vertex[], Integer> removedEdges) {
         for (Vertex[] edge : removedEdges.keySet()) {
@@ -102,4 +133,6 @@ public interface GraphModificationFunctions {
         }
     }
+
+

 }
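The edge filter added above reduces to a single predicate on the two endpoints' read counts. A standalone sketch of just that predicate (the demo class name is mine; the `Vertex` and JGraphT graph types from the diff are omitted):

```java
public class RelativeReadCountFilterDemo {
    // Mirrors RelativeReadCountFilterFunction in the diff: an alpha/beta edge
    // passes only when the absolute difference of read counts is under the threshold.
    static boolean passes(int threshold, int alphaReadCount, int betaReadCount) {
        return Math.abs(alphaReadCount - betaReadCount) < threshold;
    }

    public static void main(String[] args) {
        System.out.println(passes(5, 10, 12)); // difference 2, under threshold 5
        System.out.println(passes(5, 10, 20)); // difference 10, filtered out
    }
}
```

Note the comparison is strict, so a difference exactly equal to the threshold is filtered out.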
@@ -11,11 +11,15 @@ public class GraphWithMapData implements java.io.Serializable {

     private String sourceFilename;
     private final SimpleWeightedGraph graph;
-    private Integer numWells;
-    private Integer[] wellPopulations;
-    private Integer alphaCount;
-    private Integer betaCount;
-    private final Map<Integer, Integer> distCellsMapAlphaKey;
+    private final int numWells;
+    private final Integer[] wellPopulations;
+    private final int alphaCount;
+    private final int betaCount;
+    private final int readDepth;
+    private final double readErrorRate;
+    private final double errorCollisionRate;
+    private final double realSequenceCollisionRate;
+    private final Map<String, String> distCellsMapAlphaKey;
 //    private final Map<Integer, Integer> plateVtoAMap;
 //    private final Map<Integer, Integer> plateVtoBMap;
 //    private final Map<Integer, Integer> plateAtoVMap;
@@ -25,7 +29,9 @@ public class GraphWithMapData implements java.io.Serializable {
     private final Duration time;

     public GraphWithMapData(SimpleWeightedGraph graph, Integer numWells, Integer[] wellConcentrations,
-                            Map<Integer, Integer> distCellsMapAlphaKey, Integer alphaCount, Integer betaCount, Duration time){
+                            Map<String, String> distCellsMapAlphaKey, Integer alphaCount, Integer betaCount,
+                            Integer readDepth, Double readErrorRate, Double errorCollisionRate,
+                            Double realSequenceCollisionRate, Duration time){

 //                            Map<Integer, Integer> plateVtoAMap,
 //                            Map<Integer,Integer> plateVtoBMap, Map<Integer, Integer> plateAtoVMap,
@@ -43,6 +49,10 @@ public class GraphWithMapData implements java.io.Serializable {
 //        this.plateBtoVMap = plateBtoVMap;
 //        this.alphaWellCounts = alphaWellCounts;
 //        this.betaWellCounts = betaWellCounts;
+        this.readDepth = readDepth;
+        this.readErrorRate = readErrorRate;
+        this.errorCollisionRate = errorCollisionRate;
+        this.realSequenceCollisionRate = realSequenceCollisionRate;
         this.time = time;
     }

@@ -66,7 +76,7 @@ public class GraphWithMapData implements java.io.Serializable {
         return betaCount;
     }

-    public Map<Integer, Integer> getDistCellsMapAlphaKey() {
+    public Map<String, String> getDistCellsMapAlphaKey() {
         return distCellsMapAlphaKey;
     }

@@ -94,6 +104,8 @@ public class GraphWithMapData implements java.io.Serializable {
 //        return betaWellCounts;
 //    }

+    public Integer getReadDepth() { return readDepth; }
+
     public Duration getTime() {
         return time;
     }
@@ -105,4 +117,14 @@ public class GraphWithMapData implements java.io.Serializable {
     public String getSourceFilename() {
         return sourceFilename;
     }
+
+    public Double getReadErrorRate() {
+        return readErrorRate;
+    }
+
+    public Double getErrorCollisionRate() {
+        return errorCollisionRate;
+    }
+
+    public Double getRealSequenceCollisionRate() { return realSequenceCollisionRate; }
 }
@@ -250,6 +250,12 @@ public class InteractiveInterface {
         String filename = null;
         String cellFile = null;
         String plateFile = null;
+        Boolean simulateReadDepth = false;
+        //number of times to read each sequence in a well
+        int readDepth = 1;
+        double readErrorRate = 0.0;
+        double errorCollisionRate = 0.0;
+        double realSequenceCollisionRate = 0.0;
         try {
             String str = "\nGenerating bipartite weighted graph encoding occupancy overlap data ";
             str = str.concat("\nrequires a cell sample file and a sample plate file.");
@@ -258,6 +264,38 @@ public class InteractiveInterface {
             cellFile = sc.next();
             System.out.print("\nPlease enter name of an existing sample plate file: ");
             plateFile = sc.next();
+            System.out.println("\nEnable simulation of sequence read depth and sequence read errors? (y/n)");
+            String ans = sc.next();
+            Pattern pattern = Pattern.compile("(?:yes|y)", Pattern.CASE_INSENSITIVE);
+            Matcher matcher = pattern.matcher(ans);
+            if(matcher.matches()){
+                simulateReadDepth = true;
+            }
+            if (simulateReadDepth) {
+                System.out.print("\nPlease enter the read depth (the integer number of times a sequence is read): ");
+                readDepth = sc.nextInt();
+                if(readDepth < 1) {
+                    throw new InputMismatchException("The read depth must be an integer >= 1");
+                }
+                System.out.println("\nPlease enter the read error probability (0.0 to 1.0)");
+                System.out.print("(The probability that a sequence will be misread): ");
+                readErrorRate = sc.nextDouble();
+                if(readErrorRate < 0.0 || readErrorRate > 1.0) {
+                    throw new InputMismatchException("The read error probability must be in the range [0.0, 1.0]");
+                }
+                System.out.println("\nPlease enter the error collision probability (0.0 to 1.0)");
+                System.out.print("(The probability of a sequence being misread in a way it has been misread before): ");
+                errorCollisionRate = sc.nextDouble();
+                if(errorCollisionRate < 0.0 || errorCollisionRate > 1.0) {
+                    throw new InputMismatchException("The error collision probability must be in the range [0.0, 1.0]");
+                }
+                System.out.println("\nPlease enter the real sequence collision probability (0.0 to 1.0)");
+                System.out.print("(The probability that a (non-collision) misread produces a different, real sequence): ");
+                realSequenceCollisionRate = sc.nextDouble();
+                if(realSequenceCollisionRate < 0.0 || realSequenceCollisionRate > 1.0) {
+                    throw new InputMismatchException("The real sequence collision probability must be in the range [0.0, 1.0]");
+                }
+            }
             System.out.println("\nThe graph and occupancy data will be written to a file.");
             System.out.print("Please enter a name for the output file: ");
             filename = sc.next();
@@ -304,7 +342,8 @@ public class InteractiveInterface {
                 System.out.println("Returning to main menu.");
             }
             else{
-                GraphWithMapData data = Simulator.makeGraph(cellSample, plate, true);
+                GraphWithMapData data = Simulator.makeCDR3Graph(cellSample, plate, readDepth, readErrorRate,
+                        errorCollisionRate, realSequenceCollisionRate, true);
                 assert filename != null;
                 if(BiGpairSEQ.outputBinary()) {
                     GraphDataObjectWriter dataWriter = new GraphDataObjectWriter(filename, data);
@@ -9,27 +9,34 @@ public class MatchingResult {
     private final List<String> comments;
     private final List<String> headers;
     private final List<List<String>> allResults;
-    private final Map<Integer, Integer> matchMap;
-    private final Duration time;
+    private final Map<String, String> matchMap;

     public MatchingResult(Map<String, String> metadata, List<String> headers,
-                          List<List<String>> allResults, Map<Integer, Integer>matchMap, Duration time){
+                          List<List<String>> allResults, Map<String, String>matchMap){
         /*
         * POSSIBLE KEYS FOR METADATA MAP ARE:
         * sample plate filename *
         * graph filename *
+        * matching weight *
         * well populations *
-        * total alphas found *
-        * total betas found *
-        * high overlap threshold *
-        * low overlap threshold *
-        * maximum occupancy difference *
-        * minimum overlap percent *
+        * sequence read depth *
+        * sequence read error rate *
+        * read error collision rate *
+        * total alphas read from plate *
+        * total betas read from plate *
+        * alphas in graph (after pre-filtering) *
+        * betas in graph (after pre-filtering) *
+        * high overlap threshold for pairing *
+        * low overlap threshold for pairing *
+        * maximum occupancy difference for pairing *
+        * minimum overlap percent for pairing *
         * pairing attempt rate *
         * correct pairing count *
         * incorrect pairing count *
         * pairing error rate *
-        * simulation time (seconds)
+        * time to generate graph (seconds) *
+        * time to pair sequences (seconds) *
+        * total simulation time (seconds) *
         */
         this.metadata = metadata;
         this.comments = new ArrayList<>();
@@ -39,8 +46,6 @@ public class MatchingResult {
         this.headers = headers;
         this.allResults = allResults;
         this.matchMap = matchMap;
-        this.time = time;
-
     }

     public Map<String, String> getMetadata() {return metadata;}
@@ -57,13 +62,13 @@ public class MatchingResult {
         return headers;
     }

-    public Map<Integer, Integer> getMatchMap() {
+    public Map<String, String> getMatchMap() {
         return matchMap;
     }

-    public Duration getTime() {
-        return time;
-    }
+//    public Duration getTime() {
+//        return time;
+//    }

     public String getPlateFilename() {
         return metadata.get("sample plate filename");
@@ -84,20 +89,20 @@ public class MatchingResult {
     }

     public Integer getAlphaCount() {
-        return Integer.parseInt(metadata.get("total alpha count"));
+        return Integer.parseInt(metadata.get("total alphas read from plate"));
     }

     public Integer getBetaCount() {
-        return Integer.parseInt(metadata.get("total beta count"));
+        return Integer.parseInt(metadata.get("total betas read from plate"));
     }

-    public Integer getHighOverlapThreshold() { return Integer.parseInt(metadata.get("high overlap threshold"));}
+    public Integer getHighOverlapThreshold() { return Integer.parseInt(metadata.get("high overlap threshold for pairing"));}

-    public Integer getLowOverlapThreshold() { return Integer.parseInt(metadata.get("low overlap threshold"));}
+    public Integer getLowOverlapThreshold() { return Integer.parseInt(metadata.get("low overlap threshold for pairing"));}

-    public Integer getMaxOccupancyDifference() { return Integer.parseInt(metadata.get("maximum occupancy difference"));}
+    public Integer getMaxOccupancyDifference() { return Integer.parseInt(metadata.get("maximum occupancy difference for pairing"));}

-    public Integer getMinOverlapPercent() { return Integer.parseInt(metadata.get("minimum overlap percent"));}
+    public Integer getMinOverlapPercent() { return Integer.parseInt(metadata.get("minimum overlap percent for pairing"));}

     public Double getPairingAttemptRate() { return Double.parseDouble(metadata.get("pairing attempt rate"));}

@@ -107,6 +112,6 @@ public class MatchingResult {

     public Double getPairingErrorRate() { return Double.parseDouble(metadata.get("pairing error rate"));}

-    public String getSimulationTime() { return metadata.get("simulation time (seconds)"); }
+    public String getSimulationTime() { return metadata.get("total simulation time (seconds)"); }
 }
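The metadata keys listed above are free-form strings shared between the code that writes them and the getters that parse them, so a renamed key (as in this diff) only surfaces as a null or `NumberFormatException` at read time. A minimal sketch of the round trip the getters assume (demo class name is mine; the key name is taken from the list above):

```java
import java.util.HashMap;
import java.util.Map;

public class MetadataRoundTrip {
    // Same pattern as MatchingResult.getAlphaCount(): look up the string key,
    // then parse the stored value.
    static int readAlphaCount(Map<String, String> metadata) {
        return Integer.parseInt(metadata.get("total alphas read from plate"));
    }

    public static void main(String[] args) {
        Map<String, String> metadata = new HashMap<>();
        metadata.put("total alphas read from plate", "128");
        System.out.println(readAlphaCount(metadata));
    }
}
```

Because both sides must agree on the literal key, renames like "total alpha count" to "total alphas read from plate" have to be applied to writer and getter together, as this commit does.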
@@ -2,16 +2,24 @@

 /*
 TODO: Implement exponential distribution using inversion method - DONE
+TODO: Implement collisions with real sequences by having the counting function keep a map of all sequences it's read,
+with values of all misreads. Can then have a spurious/real collision rate, which will have count randomly select a sequence
+it's already read at least once, and put that into the list of spurious sequences for the given real sequence. Will let me get rid
+of the distinctMisreadCount map, and use this new map instead. Doing it this way, once a sequence has been misread as another
+sequence once, it is more likely to be misread that way again, as future read error collisions can also be real sequence collisions
+Prob A: a read error occurs. Prob B: it's a new error (otherwise it's a repeated error). Prob C: if new error, prob that it's
+a real sequence collision (otherwise it's a new spurious sequence) - DONE
 TODO: Implement discrete frequency distributions using Vose's Alias Method
 */


 import java.util.*;

 public class Plate {
     private CellSample cells;
     private String sourceFile;
     private String filename;
-    private List<List<Integer[]>> wells;
+    private List<List<String[]>> wells;
     private final Random rand = BiGpairSEQ.getRand();
     private int size;
     private double error;
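The one TODO left open above names Vose's alias method for sampling a discrete frequency distribution. A self-contained sketch of that method, not from the repository: an O(n) table build followed by O(1)-per-draw sampling, assuming non-negative weights with a positive sum.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Random;

public class AliasMethod {
    private final int[] alias;
    private final double[] prob;
    private final Random rand;

    public AliasMethod(double[] weights, Random rand) {
        int n = weights.length;
        this.rand = rand;
        alias = new int[n];
        prob = new double[n];
        double total = 0;
        for (double w : weights) total += w;
        // Scale weights so they average to 1, then split into under- and over-full buckets.
        double[] scaled = new double[n];
        Deque<Integer> small = new ArrayDeque<>();
        Deque<Integer> large = new ArrayDeque<>();
        for (int i = 0; i < n; i++) {
            scaled[i] = weights[i] * n / total;
            if (scaled[i] < 1.0) small.push(i); else large.push(i);
        }
        // Each under-full bucket borrows mass from an over-full one and records it as its alias.
        while (!small.isEmpty() && !large.isEmpty()) {
            int s = small.pop(), l = large.pop();
            prob[s] = scaled[s];
            alias[s] = l;
            scaled[l] = (scaled[l] + scaled[s]) - 1.0;
            if (scaled[l] < 1.0) small.push(l); else large.push(l);
        }
        // Leftovers (and numerical residue) become certain picks.
        while (!large.isEmpty()) prob[large.pop()] = 1.0;
        while (!small.isEmpty()) prob[small.pop()] = 1.0;
    }

    public int sample() {
        int i = rand.nextInt(prob.length);                  // uniform column
        return rand.nextDouble() < prob[i] ? i : alias[i];  // biased coin: keep column or take its alias
    }
}
```

With e.g. `new AliasMethod(new double[]{1.0, 0.0, 0.0}, rand)`, every `sample()` returns index 0; in general each index is drawn with probability proportional to its weight.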
@@ -48,13 +56,13 @@ public class Plate {
     }

     //constructor for returning a Plate from a PlateFileReader
-    public Plate(String filename, List<List<Integer[]>> wells) {
+    public Plate(String filename, List<List<String[]>> wells) {
         this.filename = filename;
         this.wells = wells;
         this.size = wells.size();

         List<Integer> concentrations = new ArrayList<>();
-        for (List<Integer[]> w: wells) {
+        for (List<String[]> w: wells) {
             if(!concentrations.contains(w.size())){
                 concentrations.add(w.size());
             }
@@ -65,7 +73,7 @@ public class Plate {
         }
     }

-    private void fillWellsExponential(List<Integer[]> cells, double lambda){
+    private void fillWellsExponential(List<String[]> cells, double lambda){
         this.lambda = lambda;
         exponential = true;
         int numSections = populations.length;
@@ -74,17 +82,17 @@ public class Plate {
         int n;
         while (section < numSections){
             for (int i = 0; i < (size / numSections); i++) {
-                List<Integer[]> well = new ArrayList<>();
+                List<String[]> well = new ArrayList<>();
                 for (int j = 0; j < populations[section]; j++) {
                     do {
                         //inverse transform sampling: for random number u in [0,1), x = log(1-u) / (-lambda)
                         m = (Math.log10((1 - rand.nextDouble()))/(-lambda)) * Math.sqrt(cells.size());
                     } while (m >= cells.size() || m < 0);
                     n = (int) Math.floor(m);
-                    Integer[] cellToAdd = cells.get(n).clone();
+                    String[] cellToAdd = cells.get(n).clone();
                     for(int k = 0; k < cellToAdd.length; k++){
-                        if(Math.abs(rand.nextDouble()) < error){//error applied to each seqeunce
-                            cellToAdd[k] = -1;
+                        if(Math.abs(rand.nextDouble()) <= error){//error applied to each sequence
+                            cellToAdd[k] = "-1";
                         }
                     }
                     well.add(cellToAdd);
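The comment in the hunk above gives the inversion formula x = log(1-u)/(-lambda), which is the textbook inverse-transform sample for an exponential distribution when log is the natural log; the committed code calls `Math.log10` and then rescales by the square root of the cell count, so its samples differ from a standard Exp(lambda) draw by constant factors. A minimal sketch of the plain natural-log version, for reference only (not the repository's scaled variant):

```java
import java.util.Random;

public class ExponentialSampler {
    // Inverse-transform sampling: if U ~ Uniform[0,1), then -ln(1-U)/lambda ~ Exp(lambda).
    public static double sample(Random rand, double lambda) {
        double u = rand.nextDouble();          // u in [0, 1), so 1-u is never 0
        return Math.log(1.0 - u) / (-lambda);  // x = -ln(1-u)/lambda
    }

    public static void main(String[] args) {
        Random rand = new Random(7);
        double sum = 0;
        int n = 100_000;
        for (int i = 0; i < n; i++) sum += sample(rand, 2.0);
        // the sample mean should be near 1/lambda = 0.5
        System.out.println(sum / n);
    }
}
```

Using `Math.log10` instead of `Math.log` compresses every sample by a constant factor of ln(10) ≈ 2.303, which effectively changes the rate parameter rather than the shape of the distribution.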
@@ -95,7 +103,7 @@ public class Plate {
             }
         }

-    private void fillWells( List<Integer[]> cells, double stdDev) {
+    private void fillWells( List<String[]> cells, double stdDev) {
         this.stdDev = stdDev;
         int numSections = populations.length;
         int section = 0;
@@ -103,16 +111,16 @@ public class Plate {
         int n;
         while (section < numSections){
             for (int i = 0; i < (size / numSections); i++) {
-                List<Integer[]> well = new ArrayList<>();
+                List<String[]> well = new ArrayList<>();
                 for (int j = 0; j < populations[section]; j++) {
                     do {
                         m = (rand.nextGaussian() * stdDev) + (cells.size() / 2);
                     } while (m >= cells.size() || m < 0);
                     n = (int) Math.floor(m);
-                    Integer[] cellToAdd = cells.get(n).clone();
+                    String[] cellToAdd = cells.get(n).clone();
                     for(int k = 0; k < cellToAdd.length; k++){
                         if(Math.abs(rand.nextDouble()) < error){//error applied to each sequence
-                            cellToAdd[k] = -1;
+                            cellToAdd[k] = "-1";
                         }
                     }
                     well.add(cellToAdd);
@@ -143,38 +151,107 @@ public class Plate {
         return error;
     }

-    public List<List<Integer[]>> getWells() {
+    public List<List<String[]>> getWells() {
         return wells;
     }

-    //returns a map of the counts of the sequence at cell index sIndex, in all wells
-    public Map<Integer, Integer> assayWellsSequenceS(int... sIndices){
-        return this.assayWellsSequenceS(0, size, sIndices);
-    }
-
-    //returns a map of the counts of the sequence at cell index sIndex, in a specific well
-    public Map<Integer, Integer> assayWellsSequenceS(int n, int... sIndices) { return this.assayWellsSequenceS(n, n+1, sIndices);}
-
-    //returns a map of the counts of the sequence at cell index sIndex, in a range of wells
-    public Map<Integer, Integer> assayWellsSequenceS(int start, int end, int... sIndices) {
-        Map<Integer,Integer> assay = new HashMap<>();
-        for(int sIndex: sIndices){
-            for(int i = start; i < end; i++){
-                countSequences(assay, wells.get(i), sIndex);
-            }
-        }
-        return assay;
-    }
-
-    //For the sequences at cell indices sIndices, counts number of unique sequences in the given well into the given map
-    private void countSequences(Map<Integer, Integer> wellMap, List<Integer[]> well, int... sIndices) {
-        for(Integer[] cell : well) {
-            for(int sIndex: sIndices){
-                //skip dropout sequences, which have value -1
-                if(cell[sIndex] != -1){
-                    wellMap.merge(cell[sIndex], 1, (oldValue, newValue) -> oldValue + newValue);
+    //For the sequences at cell indices sIndices, counts number of unique sequences in all wells.
+    //Also simulates sequence read errors with given probabilities.
+    //Returns a map of SequenceRecords containing plate data for all sequences read.
+    //TODO actually implement usage of misreadSequences - DONE
+    public Map<String, SequenceRecord> countSequences(Integer readDepth, Double readErrorRate,
+                                                      Double errorCollisionRate, Double realSequenceCollisionRate, int... sIndices) {
+        SequenceType[] sequenceTypes = EnumSet.allOf(SequenceType.class).toArray(new SequenceType[0]);
+        //Map of all real sequences read. Keys are sequences, values are ways sequence has been misread.
+        Map<String, List<String>> sequencesAndMisreads = new HashMap<>();
+        //Map of all sequences read. Keys are sequences, values are associated SequenceRecords
+        Map<String, SequenceRecord> sequenceMap = new LinkedHashMap<>();
+        //get list of all distinct, real sequences
+        String[] realSequences = assayWells(sIndices).toArray(new String[0]);
+        for (int well = 0; well < size; well++) {
+            for (String[] cell: wells.get(well)) {
+                for (int sIndex: sIndices) {
+                    //the sequence being read
+                    String currentSequence = cell[sIndex];
+                    //skip dropout sequences, which have value -1
+                    if (!"-1".equals(currentSequence)) {
+                        //keep rereading the sequence until the read depth is reached
+                        for (int j = 0; j < readDepth; j++) {
+                            //The sequence is misread
+                            if (rand.nextDouble() < readErrorRate) {
+                                //The sequence hasn't been read or misread before
+                                if (!sequencesAndMisreads.containsKey(currentSequence)) {
+                                    sequencesAndMisreads.put(currentSequence, new ArrayList<>());
+                                }
+                                //The specific misread hasn't happened before
+                                if (rand.nextDouble() >= errorCollisionRate || sequencesAndMisreads.get(currentSequence).size() == 0) {
+                                    //The misread doesn't collide with a real sequence already on the plate and some sequences have already been read
+                                    if(rand.nextDouble() >= realSequenceCollisionRate || !sequenceMap.isEmpty()){
+                                        StringBuilder spurious = new StringBuilder(currentSequence);
+                                        for (int k = 0; k <= sequencesAndMisreads.get(currentSequence).size(); k++) {
+                                            spurious.append("*");
+                                        }
+                                        //New sequence record for the spurious sequence
+                                        SequenceRecord tmp = new SequenceRecord(spurious.toString(), sequenceTypes[sIndex]);
+                                        tmp.addRead(well);
+                                        sequenceMap.put(spurious.toString(), tmp);
+                                        //add spurious sequence to list of misreads for the real sequence
+                                        sequencesAndMisreads.get(currentSequence).add(spurious.toString());
+                                    }
+                                    //The misread collides with a real sequence already read from plate
+                                    else {
+                                        String wrongSequence;
+                                        do{
+                                            //get a random real sequence that's been read from the plate before
+                                            int index = rand.nextInt(realSequences.length);
+                                            wrongSequence = realSequences[index];
+                                            //make sure it's not accidentally the *right* sequence
+                                            //Also that it's not a wrong sequence already in the misread list
+                                        } while(currentSequence.equals(wrongSequence) || sequencesAndMisreads.get(currentSequence).contains(wrongSequence));
+                                        //update the SequenceRecord for wrongSequence
+                                        sequenceMap.get(wrongSequence).addRead(well);
+                                        //add wrongSequence to the misreads for currentSequence
+                                        sequencesAndMisreads.get(currentSequence).add(wrongSequence);
+                                    }
+                                }
+                            }
+                            //The sequence is read correctly
+                            else {
+                                //the sequence hasn't been read before
+                                if (!sequenceMap.containsKey(currentSequence)) {
+                                    //create new record for the sequence
+                                    SequenceRecord tmp = new SequenceRecord(currentSequence, sequenceTypes[sIndex]);
+                                    //add this read to the sequence record
+                                    tmp.addRead(well);
+                                    //add the sequence and its record to the sequence map
+                                    sequenceMap.put(currentSequence, tmp);
+                                    //add the sequence to the sequences and misreads map
+                                    sequencesAndMisreads.put(currentSequence, new ArrayList<>());
+                                }
+                                //the sequence has been read before
+                                else {
+                                    //get the sequence's record and add this read to it
+                                    sequenceMap.get(currentSequence).addRead(well);
+                                }
+                            }
+                        }
+                    }
                 }
             }
         }
+        return sequenceMap;
+    }
+
+    private HashSet<String> assayWells(int[] indices) {
+        HashSet<String> allSequences = new HashSet<>();
+        for (List<String[]> well: wells) {
+            for (String[] cell: well) {
+                for(int index: indices) {
+                    allSequences.add(cell[index]);
+                }
+            }
+        }
+        return allSequences;
     }

     public String getSourceFileName() {
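The Plate TODO describes the misread model as three chained probabilities: Prob A, a read error occurs; Prob B, it is a new rather than repeated error; Prob C, a new error collides with a real sequence. A compact sketch of just that decision chain (the `Outcome` names and class are mine; the committed `countSequences` interleaves this logic with the record bookkeeping, and its real-collision branch also conditions on whether any sequence has been read yet):

```java
import java.util.List;
import java.util.Random;

public class MisreadModel {
    enum Outcome { CORRECT, REPEATED_ERROR, NEW_SPURIOUS, REAL_COLLISION }

    static Outcome read(Random rand, double readErrorRate, double errorCollisionRate,
                        double realSequenceCollisionRate, List<String> priorMisreads) {
        // Prob A: does a read error occur at all?
        if (rand.nextDouble() >= readErrorRate) return Outcome.CORRECT;
        // Prob B: repeat one of this sequence's prior misreads, if any exist.
        if (rand.nextDouble() < errorCollisionRate && !priorMisreads.isEmpty())
            return Outcome.REPEATED_ERROR;
        // Prob C: a new error either lands on a real sequence or creates a spurious one.
        return rand.nextDouble() < realSequenceCollisionRate
                ? Outcome.REAL_COLLISION : Outcome.NEW_SPURIOUS;
    }
}
```

As the TODO notes, routing repeated errors back through the misread list makes a sequence that has been misread one way more likely to be misread the same way again.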
@@ -13,7 +13,7 @@ import java.util.regex.Pattern;

 public class PlateFileReader {

-    private List<List<Integer[]>> wells = new ArrayList<>();
+    private List<List<String[]>> wells = new ArrayList<>();
     private String filename;

     public PlateFileReader(String filename){
@@ -32,17 +32,17 @@ public class PlateFileReader {
             CSVParser parser = new CSVParser(reader, plateFileFormat);
         ){
             for(CSVRecord record: parser.getRecords()) {
-                List<Integer[]> well = new ArrayList<>();
+                List<String[]> well = new ArrayList<>();
                 for(String s: record) {
                     if(!"".equals(s)) {
-                        String[] intString = s.replaceAll("\\[", "")
+                        String[] sequences = s.replaceAll("\\[", "")
                                 .replaceAll("]", "")
                                 .replaceAll(" ", "")
                                 .split(",");
-                        //System.out.println(intString);
-                        Integer[] arr = new Integer[intString.length];
-                        for (int i = 0; i < intString.length; i++) {
-                            arr[i] = Integer.valueOf(intString[i]);
+                        //System.out.println(sequences);
+                        String[] arr = new String[sequences.length];
+                        for (int i = 0; i < sequences.length; i++) {
+                            arr[i] = sequences[i];
                         }
                         well.add(arr);
                     }
@@ -10,7 +10,7 @@ import java.util.*;
 
 public class PlateFileWriter {
     private int size;
-    private List<List<Integer[]>> wells;
+    private List<List<String[]>> wells;
     private double stdDev;
     private double lambda;
     private Double error;
@@ -40,13 +40,13 @@ public class PlateFileWriter {
     }
 
     public void writePlateFile(){
-        Comparator<List<Integer[]>> listLengthDescending = Comparator.comparingInt(List::size);
+        Comparator<List<String[]>> listLengthDescending = Comparator.comparingInt(List::size);
         wells.sort(listLengthDescending.reversed());
         int maxLength = wells.get(0).size();
         List<List<String>> wellsAsStrings = new ArrayList<>();
-        for (List<Integer[]> w: wells){
+        for (List<String[]> w: wells){
             List<String> tmp = new ArrayList<>();
-            for(Integer[] c: w) {
+            for(String[] c: w) {
                 tmp.add(Arrays.toString(c));
             }
             wellsAsStrings.add(tmp);
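The writer change above only swaps `Integer[]` for `String[]`; the sort-then-stringify pattern it relies on can be sketched on its own. This is an illustrative fragment, not the full `PlateFileWriter`:

```java
import java.util.*;

public class WellSortDemo {
    public static void main(String[] args) {
        // two toy "wells", each a list of per-cell sequence arrays
        List<String[]> well1 = new ArrayList<>();
        well1.add(new String[]{"a"});
        List<String[]> well2 = new ArrayList<>();
        well2.add(new String[]{"b"});
        well2.add(new String[]{"c", "d"});
        List<List<String[]>> wells = new ArrayList<>(List.of(well1, well2));

        // sort wells so the most occupied well comes first, as in writePlateFile
        Comparator<List<String[]>> listLengthDescending = Comparator.comparingInt(List::size);
        wells.sort(listLengthDescending.reversed());
        System.out.println(wells.get(0).size()); // maxLength = 2

        // each cell's sequences are flattened to a bracketed string for the CSV
        System.out.println(Arrays.toString(wells.get(0).get(1))); // [c, d]
    }
}
```
The bracketed form is what the reader's `replaceAll("\\[", "")...split(",")` chain later undoes.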
 65 src/main/java/SequenceRecord.java Normal file
@@ -0,0 +1,65 @@
+/*
+Class to represent individual sequences, holding their well occupancy and read count information.
+Will make a map of these keyed to the sequences themselves.
+Ideally, I'll be able to construct both the Vertices and the weights matrix from this map.
+
+ */
+
+import java.io.Serializable;
+import java.util.*;
+
+public class SequenceRecord implements Serializable {
+    private final String sequence;
+    private final SequenceType type;
+    //keys are well numbers, values are read count in that well
+    private final Map<Integer, Integer> wells;
+
+    public SequenceRecord (String sequence, SequenceType type) {
+        this.sequence = sequence;
+        this.type = type;
+        this.wells = new LinkedHashMap<>();
+    }
+
+    //this shouldn't be necessary, since the sequence will be the map key, but
+    public String getSequence() {
+        return sequence;
+    }
+
+    public SequenceType getSequenceType(){
+        return type;
+    }
+
+    //use this to update the record for each new read
+    public void addRead(Integer wellNumber) {
+        wells.merge(wellNumber, 1, Integer::sum);
+    }
+
+    //don't know if I'll ever need this
+    public void addWellData(Integer wellNumber, Integer readCount) {
+        wells.put(wellNumber, readCount);
+    }
+
+    public Set<Integer> getWells() {
+        return wells.keySet();
+    }
+
+    public Map<Integer, Integer> getWellOccupancies() { return wells;}
+
+    public boolean isInWell(Integer wellNumber) {
+        return wells.containsKey(wellNumber);
+    }
+
+    public Integer getOccupancy() {
+        return wells.size();
+    }
+
+    //read count for whole plate
+    public Integer getReadCount(){
+        return wells.values().stream().mapToInt(Integer::valueOf).sum();
+    }
+
+    //read count in a specific well
+    public Integer getReadCount(Integer wellNumber) {
+        return wells.get(wellNumber);
+    }
+}
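The new `SequenceRecord` class tallies, per sequence, which wells it was read in and how many times. A minimal, self-contained sketch of that bookkeeping (the enum and class bodies here are trimmed stand-ins for illustration, not the repo's full definitions):

```java
import java.util.*;

public class SequenceRecordDemo {
    // trimmed stand-ins for the repo's SequenceType and SequenceRecord (assumptions, not the full classes)
    enum SequenceType { CDR3_ALPHA, CDR3_BETA }

    static class SequenceRecord {
        private final String sequence;
        private final SequenceType type;
        private final Map<Integer, Integer> wells = new LinkedHashMap<>();

        SequenceRecord(String sequence, SequenceType type) { this.sequence = sequence; this.type = type; }
        // one read of this sequence observed in one well
        void addRead(Integer wellNumber) { wells.merge(wellNumber, 1, Integer::sum); }
        // number of distinct wells the sequence was seen in
        Integer getOccupancy() { return wells.size(); }
        // total reads across the whole plate
        Integer getReadCount() { return wells.values().stream().mapToInt(Integer::valueOf).sum(); }
        Integer getReadCount(Integer well) { return wells.get(well); }
    }

    public static void main(String[] args) {
        // three reads of one beta sequence: twice in well 0, once in well 5
        SequenceRecord rec = new SequenceRecord("CASSLGQAYEQYF", SequenceType.CDR3_BETA);
        rec.addRead(0);
        rec.addRead(0);
        rec.addRead(5);
        System.out.println(rec.getOccupancy()); // 2 wells occupied
        System.out.println(rec.getReadCount()); // 3 reads total
    }
}
```
Keying a map of these records by sequence is what lets the refactored simulator derive both vertices and edge weights from a single scan of the plate.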
@@ -12,97 +12,101 @@ import java.text.NumberFormat;
 import java.time.Instant;
 import java.time.Duration;
 import java.util.*;
-import java.util.stream.IntStream;
+/*
+Refactor notes
+What would be necessary to do everything with only one scan through the sample plate?
+I would need to keep a list of sequences (real and spurious), and metadata about each sequence.
+I would need the data:
+* # of each well the sequence appears in
+* Read count in that well
+ */
 
-import static java.lang.Float.*;
 
 //NOTE: "sequence" in method and variable names refers to a peptide sequence from a simulated T cell
 public class Simulator implements GraphModificationFunctions {
 
 
-    //Make the graph needed for matching sequences.
-    //sourceVertexIndices and targetVertexIndices are indices within the cell to use as for the two sets of vertices
-    //in the bipartite graph. "Source" and "target" are JGraphT terms for the two vertices an edge touches,
-    //even if not directed.
-    public static GraphWithMapData makeGraph(CellSample cellSample, Plate samplePlate, boolean verbose) {
+    public static GraphWithMapData makeCDR3Graph(CellSample cellSample, Plate samplePlate, int readDepth,
+                                                 double readErrorRate, double errorCollisionRate,
+                                                 double realSequenceCollisionRate, boolean verbose) {
+        //start timing
         Instant start = Instant.now();
-        List<Integer[]> distinctCells = cellSample.getCells();
         int[] alphaIndices = {SequenceType.CDR3_ALPHA.ordinal()};
         int[] betaIndices = {SequenceType.CDR3_BETA.ordinal()};
+        List<String[]> distinctCells = cellSample.getCells();
         int numWells = samplePlate.getSize();
 
+        //Make a hashmap keyed to alphas, values are associated betas.
         if(verbose){System.out.println("Making cell maps");}
-        //HashMap keyed to Alphas, values Betas
-        Map<Integer, Integer> distCellsMapAlphaKey = makeSequenceToSequenceMap(distinctCells, 0, 1);
+        Map<String, String> distCellsMapAlphaKey = makeSequenceToSequenceMap(distinctCells,
+                SequenceType.CDR3_ALPHA.ordinal(), SequenceType.CDR3_BETA.ordinal());
         if(verbose){System.out.println("Cell maps made");}
 
-        if(verbose){System.out.println("Making well maps");}
-        Map<Integer, Integer> allAlphas = samplePlate.assayWellsSequenceS(alphaIndices);
-        Map<Integer, Integer> allBetas = samplePlate.assayWellsSequenceS(betaIndices);
-        int alphaCount = allAlphas.size();
-        if(verbose){System.out.println("All alphas count: " + alphaCount);}
-        int betaCount = allBetas.size();
-        if(verbose){System.out.println("All betas count: " + betaCount);}
-        if(verbose){System.out.println("Well maps made");}
+        //Make linkedHashMap keyed to sequences, values are SequenceRecords reflecting plate statistics
+        if(verbose){System.out.println("Making sample plate sequence maps");}
+        Map<String, SequenceRecord> alphaSequences = samplePlate.countSequences(readDepth, readErrorRate,
+                errorCollisionRate, realSequenceCollisionRate, alphaIndices);
+        int alphaCount = alphaSequences.size();
+        if(verbose){System.out.println("Alphas sequences read: " + alphaCount);}
+        Map<String, SequenceRecord> betaSequences = samplePlate.countSequences(readDepth, readErrorRate,
+                errorCollisionRate, realSequenceCollisionRate, betaIndices);
+        int betaCount = betaSequences.size();
+        if(verbose){System.out.println("Betas sequences read: " + betaCount);}
+        if(verbose){System.out.println("Sample plate sequence maps made");}
 
+        //pre-filter saturating sequences and sequences likely to be misreads
+        //ideally we wouldn't do any graph pre-filtering. But sequences present in all wells add a huge number of edges to the graph and don't carry any signal value
         if(verbose){System.out.println("Removing sequences present in all wells.");}
-        filterByOccupancyThresholds(allAlphas, 1, numWells - 1);
-        filterByOccupancyThresholds(allBetas, 1, numWells - 1);
+        filterByOccupancyThresholds(alphaSequences, 1, numWells - 1);
+        filterByOccupancyThresholds(betaSequences, 1, numWells - 1);
         if(verbose){System.out.println("Sequences removed");}
-        int pairableAlphaCount = allAlphas.size();
-        if(verbose){System.out.println("Remaining alphas count: " + pairableAlphaCount);}
-        int pairableBetaCount = allBetas.size();
-        if(verbose){System.out.println("Remaining betas count: " + pairableBetaCount);}
+        if(verbose){System.out.println("Remaining alpha sequence count: " + alphaSequences.size());}
+        if(verbose){System.out.println("Remaining beta sequence count: " + betaSequences.size());}
+        if (readDepth > 1) {
+            if(verbose){System.out.println("Removing sequences with disparate occupancies and read counts");}
+            filterByOccupancyAndReadCount(alphaSequences, readDepth);
+            filterByOccupancyAndReadCount(betaSequences, readDepth);
+            if(verbose){System.out.println("Sequences removed");}
+            if(verbose){System.out.println("Remaining alpha sequence count: " + alphaSequences.size());}
+            if(verbose){System.out.println("Remaining beta sequence count: " + betaSequences.size());}
+        }
+        int pairableAlphaCount = alphaSequences.size();
+        if(verbose){System.out.println("Remaining alpha sequence count: " + pairableAlphaCount);}
+        int pairableBetaCount = betaSequences.size();
+        if(verbose){System.out.println("Remaining beta sequence count: " + pairableBetaCount);}
 
+        //construct the graph. For simplicity, going to make
         if(verbose){System.out.println("Making vertex maps");}
         //For the SimpleWeightedBipartiteGraphMatrixGenerator, all vertices must have
         //distinct numbers associated with them. Since I'm using a 2D array, that means
         //distinct indices between the rows and columns. vertexStartValue lets me track where I switch
         //from numbering rows to columns, so I can assign unique numbers to every vertex, and then
         //subtract the vertexStartValue from betas to use their vertex labels as array indices
-        Integer vertexStartValue = 0;
+        int vertexStartValue = 0;
         //keys are sequential integer vertices, values are alphas
-        Map<Integer, Integer> plateVtoAMap = makeVertexToSequenceMap(allAlphas, vertexStartValue);
+        Map<String, Integer> plateAtoVMap = makeSequenceToVertexMap(alphaSequences, vertexStartValue);
         //new start value for vertex to beta map should be one more than final vertex value in alpha map
-        vertexStartValue += plateVtoAMap.size();
-        //keys are sequential integers vertices, values are betas
-        Map<Integer, Integer> plateVtoBMap = makeVertexToSequenceMap(allBetas, vertexStartValue);
-        //keys are alphas, values are sequential integer vertices from previous map
-        Map<Integer, Integer> plateAtoVMap = invertVertexMap(plateVtoAMap);
-        //keys are betas, values are sequential integer vertices from previous map
-        Map<Integer, Integer> plateBtoVMap = invertVertexMap(plateVtoBMap);
+        vertexStartValue += plateAtoVMap.size();
+        //keys are betas, values are sequential integers
+        Map<String, Integer> plateBtoVMap = makeSequenceToVertexMap(betaSequences, vertexStartValue);
         if(verbose){System.out.println("Vertex maps made");}
 
         //make adjacency matrix for bipartite graph generator
         //(technically this is only 1/4 of an adjacency matrix, but that's all you need
         //for a bipartite graph, and all the SimpleWeightedBipartiteGraphMatrixGenerator class expects.)
-        if(verbose){System.out.println("Creating adjacency matrix");}
-        //Count how many wells each alpha sequence appears in
-        Map<Integer, Integer> alphaWellCounts = new HashMap<>();
-        //count how many wells each beta sequence appears in
-        Map<Integer, Integer> betaWellCounts = new HashMap<>();
-        //the adjacency matrix to be used by the graph generator
-        double[][] weights = new double[plateVtoAMap.size()][plateVtoBMap.size()];
-        countSequencesAndFillMatrix(samplePlate, allAlphas, allBetas, plateAtoVMap,
-                plateBtoVMap, alphaIndices, betaIndices, alphaWellCounts, betaWellCounts, weights);
-        if(verbose){System.out.println("Matrix created");}
-
-        //create bipartite graph
-        if(verbose){System.out.println("Creating graph");}
+        if(verbose){System.out.println("Making adjacency matrix");}
+        double[][] weights = new double[plateAtoVMap.size()][plateBtoVMap.size()];
+        fillAdjacencyMatrix(weights, vertexStartValue, alphaSequences, betaSequences, plateAtoVMap, plateBtoVMap);
+        if(verbose){System.out.println("Adjacency matrix made");}
+        //make bipartite graph
+        if(verbose){System.out.println("Making bipartite weighted graph");}
         //the graph object
         SimpleWeightedGraph<Vertex, DefaultWeightedEdge> graph =
                 new SimpleWeightedGraph<>(DefaultWeightedEdge.class);
         //the graph generator
         SimpleWeightedBipartiteGraphMatrixGenerator graphGenerator = new SimpleWeightedBipartiteGraphMatrixGenerator();
         //the list of alpha vertices
-        //List<Integer> alphaVertices = new ArrayList<>(plateVtoAMap.keySet()); //This will work because LinkedHashMap preserves order of entry
         List<Vertex> alphaVertices = new ArrayList<>();
-        //start with map of all alphas mapped to vertex values, get occupancy from the alphaWellCounts map
-        for (Integer seq : plateAtoVMap.keySet()) {
-            Vertex alphaVertex = new Vertex(SequenceType.CDR3_ALPHA, seq, alphaWellCounts.get(seq), plateAtoVMap.get(seq));
+        for (String seq : plateAtoVMap.keySet()) {
+            Vertex alphaVertex = new Vertex(alphaSequences.get(seq), plateAtoVMap.get(seq));
             alphaVertices.add(alphaVertex);
         }
         //Sort to make sure the order of vertices in list matches the order of the adjacency matrix
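The hunk above replaces the vertex-to-sequence maps and their inversion with direct sequence-to-vertex maps: alphas are numbered from 0, betas from an offset equal to the alpha count, so every vertex label is unique and subtracting the offset recovers a matrix column index. A minimal sketch of that numbering scheme (names here are illustrative, not the repo's):

```java
import java.util.*;

public class VertexNumberingDemo {
    // Assign consecutive integer labels to sequences, starting at startValue
    // (mirrors the makeSequenceToVertexMap idea, simplified to plain String keys)
    static Map<String, Integer> numberSequences(Collection<String> sequences, int startValue) {
        Map<String, Integer> map = new LinkedHashMap<>(); // preserves insertion order
        int index = startValue;
        for (String s : sequences) {
            map.put(s, index++);
        }
        return map;
    }

    public static void main(String[] args) {
        Map<String, Integer> alphaToVertex = numberSequences(List.of("A1", "A2", "A3"), 0);
        // betas start numbering where alphas left off, so all labels are distinct
        int offset = alphaToVertex.size();
        Map<String, Integer> betaToVertex = numberSequences(List.of("B1", "B2"), offset);
        // subtracting the offset recovers a 0-based column index for the weights matrix
        System.out.println(betaToVertex.get("B2") - offset); // column index 1
    }
}
```
Using `LinkedHashMap` keeps the vertex order stable, which is what lets the vertex lists line up with the adjacency matrix rows and columns.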
@@ -110,10 +114,9 @@ public class Simulator implements GraphModificationFunctions {
         //Add ordered list of vertices to the graph
         graphGenerator.first(alphaVertices);
         //the list of beta vertices
-        //List<Integer> betaVertices = new ArrayList<>(plateVtoBMap.keySet());//This will work because LinkedHashMap preserves order of entry
         List<Vertex> betaVertices = new ArrayList<>();
-        for (Integer seq : plateBtoVMap.keySet()) {
-            Vertex betaVertex = new Vertex(SequenceType.CDR3_BETA, seq, betaWellCounts.get(seq), plateBtoVMap.get(seq));
+        for (String seq : plateBtoVMap.keySet()) {
+            Vertex betaVertex = new Vertex(betaSequences.get(seq), plateBtoVMap.get(seq));
             betaVertices.add(betaVertex);
         }
         //Sort to make sure the order of vertices in list matches the order of the adjacency matrix
@@ -124,12 +127,12 @@ public class Simulator implements GraphModificationFunctions {
         graphGenerator.weights(weights);
         graphGenerator.generateGraph(graph);
         if(verbose){System.out.println("Graph created");}
+        //stop timing
         Instant stop = Instant.now();
         Duration time = Duration.between(start, stop);
 
         //create GraphWithMapData object
-        GraphWithMapData output = new GraphWithMapData(graph, numWells, samplePlate.getPopulations(), distCellsMapAlphaKey, alphaCount, betaCount, time);
+        GraphWithMapData output = new GraphWithMapData(graph, numWells, samplePlate.getPopulations(), distCellsMapAlphaKey,
+                alphaCount, betaCount, readDepth, readErrorRate, errorCollisionRate, realSequenceCollisionRate, time);
         //Set source file name in graph to name of sample plate
         output.setSourceFilename(samplePlate.getFilename());
         //return GraphWithMapData object
@@ -147,7 +150,7 @@ public class Simulator implements GraphModificationFunctions {
         int numWells = data.getNumWells();
         //Integer alphaCount = data.getAlphaCount();
         //Integer betaCount = data.getBetaCount();
-        Map<Integer, Integer> distCellsMapAlphaKey = data.getDistCellsMapAlphaKey();
+        Map<String, String> distCellsMapAlphaKey = data.getDistCellsMapAlphaKey();
         Set<Vertex> alphas = new HashSet<>();
         Set<Vertex> betas = new HashSet<>();
         for(Vertex v: graph.vertexSet()) {
@@ -228,17 +231,14 @@ public class Simulator implements GraphModificationFunctions {
         int trueCount = 0;
         int falseCount = 0;
         boolean check;
-        Map<Integer, Integer> matchMap = new HashMap<>();
+        Map<String, String> matchMap = new HashMap<>();
         while(weightIter.hasNext()) {
             e = weightIter.next();
             Vertex source = graph.getEdgeSource(e);
             Vertex target = graph.getEdgeTarget(e);
-            //Integer source = graph.getEdgeSource(e);
-            //Integer target = graph.getEdgeTarget(e);
             //The match map is all matches found, not just true matches!
             matchMap.put(source.getSequence(), target.getSequence());
             check = target.getSequence().equals(distCellsMapAlphaKey.get(source.getSequence()));
-            //check = plateVtoBMap.get(target).equals(distCellsMapAlphaKey.get(plateVtoAMap.get(source)));
             if(check) {
                 trueCount++;
             }
@@ -247,11 +247,11 @@ public class Simulator implements GraphModificationFunctions {
             }
             List<String> result = new ArrayList<>();
             //alpha sequence
-            result.add(source.getSequence().toString());
+            result.add(source.getSequence());
             //alpha well count
             result.add(source.getOccupancy().toString());
             //beta sequence
-            result.add(target.getSequence().toString());
+            result.add(target.getSequence());
             //beta well count
             result.add(target.getOccupancy().toString());
             //overlap count
@@ -291,31 +291,45 @@ public class Simulator implements GraphModificationFunctions {
             populationsStringBuilder.append(wellPopulations[i].toString());
         }
         String wellPopulationsString = populationsStringBuilder.toString();
+        //graph generation time
+        Duration graphTime = data.getTime();
+        //MWM run time
+        Duration pairingTime = Duration.between(start, stop);
         //total simulation time
-        Duration time = Duration.between(start, stop);
-        time = time.plus(data.getTime());
+        Duration totalTime = graphTime.plus(pairingTime);
 
         Map<String, String> metadata = new LinkedHashMap<>();
         metadata.put("sample plate filename", data.getSourceFilename());
         metadata.put("graph filename", dataFilename);
-        metadata.put("algorithm type", algoType);
+        metadata.put("MWM algorithm type", algoType);
         metadata.put("matching weight", totalMatchingWeight.toString());
         metadata.put("well populations", wellPopulationsString);
-        metadata.put("total alphas on plate", data.getAlphaCount().toString());
-        metadata.put("total betas on plate", data.getBetaCount().toString());
+        metadata.put("sequence read depth", data.getReadDepth().toString());
+        metadata.put("sequence read error rate", data.getReadErrorRate().toString());
+        metadata.put("read error collision rate", data.getErrorCollisionRate().toString());
+        metadata.put("real sequence collision rate", data.getRealSequenceCollisionRate().toString());
+        metadata.put("total alphas read from plate", data.getAlphaCount().toString());
+        metadata.put("total betas read from plate", data.getBetaCount().toString());
+        //HARD CODED, PARAMETERIZE LATER
+        metadata.put("pre-filter sequences present in all wells", "true");
+        //HARD CODED, PARAMETERIZE LATER
+        metadata.put("pre-filter sequences based on occupancy/read count discrepancy", "true");
         metadata.put("alphas in graph (after pre-filtering)", graphAlphaCount.toString());
         metadata.put("betas in graph (after pre-filtering)", graphBetaCount.toString());
-        metadata.put("high overlap threshold", highThreshold.toString());
-        metadata.put("low overlap threshold", lowThreshold.toString());
-        metadata.put("minimum overlap percent", minOverlapPercent.toString());
-        metadata.put("maximum occupancy difference", maxOccupancyDifference.toString());
+        metadata.put("high overlap threshold for pairing", highThreshold.toString());
+        metadata.put("low overlap threshold for pairing", lowThreshold.toString());
+        metadata.put("minimum overlap percent for pairing", minOverlapPercent.toString());
+        metadata.put("maximum occupancy difference for pairing", maxOccupancyDifference.toString());
         metadata.put("pairing attempt rate", attemptRateTrunc.toString());
         metadata.put("correct pairing count", Integer.toString(trueCount));
         metadata.put("incorrect pairing count", Integer.toString(falseCount));
         metadata.put("pairing error rate", pairingErrorRateTrunc.toString());
-        metadata.put("simulation time (seconds)", nf.format(time.toSeconds()));
+        metadata.put("time to generate graph (seconds)", nf.format(graphTime.toSeconds()));
+        metadata.put("time to pair sequences (seconds)",nf.format(pairingTime.toSeconds()));
+        metadata.put("total simulation time (seconds)", nf.format(totalTime.toSeconds()));
         //create MatchingResult object
-        MatchingResult output = new MatchingResult(metadata, header, allResults, matchMap, time);
+        MatchingResult output = new MatchingResult(metadata, header, allResults, matchMap);
         if(verbose){
             for(String s: output.getComments()){
                 System.out.println(s);
@@ -637,81 +651,77 @@ public class Simulator implements GraphModificationFunctions {
|
|||||||
// }
|
// }
|
||||||
|
|
||||||
//Remove sequences based on occupancy
|
//Remove sequences based on occupancy
|
||||||
public static void filterByOccupancyThresholds(Map<Integer, Integer> wellMap, int low, int high){
|
public static void filterByOccupancyThresholds(Map<String, SequenceRecord> wellMap, int low, int high){
|
||||||
List<Integer> noise = new ArrayList<>();
|
List<String> noise = new ArrayList<>();
|
||||||
for(Integer k: wellMap.keySet()){
|
for(String k: wellMap.keySet()){
|
||||||
if((wellMap.get(k) > high) || (wellMap.get(k) < low)){
|
if((wellMap.get(k).getOccupancy() > high) || (wellMap.get(k).getOccupancy() < low)){
|
||||||
noise.add(k);
|
noise.add(k);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
for(Integer k: noise) {
|
for(String k: noise) {
|
||||||
wellMap.remove(k);
|
wellMap.remove(k);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
//Counts the well occupancy of the row peptides and column peptides into given maps, and
|
public static void filterByOccupancyAndReadCount(Map<String, SequenceRecord> sequences, int readDepth) {
|
||||||
//fills weights in the given 2D array
|
List<String> noise = new ArrayList<>();
|
||||||
private static void countSequencesAndFillMatrix(Plate samplePlate,
|
for(String k : sequences.keySet()){
|
||||||
Map<Integer,Integer> allRowSequences,
|
//occupancy times read depth should be more than half the sequence read count if the read error rate is low
|
||||||
Map<Integer,Integer> allColumnSequences,
|
Integer threshold = (sequences.get(k).getOccupancy() * readDepth) / 2;
|
||||||
Map<Integer,Integer> rowSequenceToVertexMap,
|
if(sequences.get(k).getReadCount() < threshold) {
|
||||||
Map<Integer,Integer> columnSequenceToVertexMap,
|
noise.add(k);
|
||||||
int[] rowSequenceIndices,
|
|
||||||
int[] colSequenceIndices,
|
|
||||||
Map<Integer, Integer> rowSequenceCounts,
|
|
||||||
Map<Integer,Integer> columnSequenceCounts,
|
|
||||||
double[][] weights){
|
|
||||||
Map<Integer, Integer> wellNRowSequences = null;
|
|
||||||
Map<Integer, Integer> wellNColumnSequences = null;
|
|
||||||
int vertexStartValue = rowSequenceToVertexMap.size();
|
|
||||||
int numWells = samplePlate.getSize();
|
|
||||||
for (int n = 0; n < numWells; n++) {
|
|
||||||
wellNRowSequences = samplePlate.assayWellsSequenceS(n, rowSequenceIndices);
|
|
||||||
for (Integer a : wellNRowSequences.keySet()) {
|
|
||||||
if(allRowSequences.containsKey(a)){
|
|
||||||
rowSequenceCounts.merge(a, 1, (oldValue, newValue) -> oldValue + newValue);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
wellNColumnSequences = samplePlate.assayWellsSequenceS(n, colSequenceIndices);
|
|
||||||
for (Integer b : wellNColumnSequences.keySet()) {
|
|
||||||
if(allColumnSequences.containsKey(b)){
|
|
||||||
columnSequenceCounts.merge(b, 1, (oldValue, newValue) -> oldValue + newValue);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
for (Integer i : wellNRowSequences.keySet()) {
|
|
||||||
if(allRowSequences.containsKey(i)){
|
|
||||||
for (Integer j : wellNColumnSequences.keySet()) {
|
|
||||||
if(allColumnSequences.containsKey(j)){
|
|
||||||
weights[rowSequenceToVertexMap.get(i)][columnSequenceToVertexMap.get(j) - vertexStartValue] += 1.0;
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
for(String k : noise) {
|
||||||
|
sequences.remove(k);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
}
|
private static Map<String, String> makeSequenceToSequenceMap(List<String[]> cells, int keySequenceIndex,
|
||||||
}
|
|
||||||
|
|
||||||
private static Map<Integer, Integer> makeSequenceToSequenceMap(List<Integer[]> cells, int keySequenceIndex,
|
|
||||||
int valueSequenceIndex){
|
int valueSequenceIndex){
|
||||||
Map<Integer, Integer> keySequenceToValueSequenceMap = new HashMap<>();
|
Map<String, String> keySequenceToValueSequenceMap = new HashMap<>();
|
||||||
for (Integer[] cell : cells) {
|
for (String[] cell : cells) {
|
||||||
keySequenceToValueSequenceMap.put(cell[keySequenceIndex], cell[valueSequenceIndex]);
|
keySequenceToValueSequenceMap.put(cell[keySequenceIndex], cell[valueSequenceIndex]);
|
||||||
}
|
}
|
||||||
return keySequenceToValueSequenceMap;
|
return keySequenceToValueSequenceMap;
|
||||||
}
|
}
|
||||||
|
|
||||||
private static Map<Integer, Integer> makeVertexToSequenceMap(Map<Integer, Integer> sequences, Integer startValue) {
|
private static Map<Integer, String> makeVertexToSequenceMap(Map<String, SequenceRecord> sequences, Integer startValue) {
|
||||||
Map<Integer, Integer> map = new LinkedHashMap<>(); //LinkedHashMap to preserve order of entry
|
Map<Integer, String> map = new LinkedHashMap<>(); //LinkedHashMap to preserve order of entry
|
||||||
Integer index = startValue; //is this necessary? I don't think I use this.
|
Integer index = startValue;
|
||||||
for (Integer k: sequences.keySet()) {
|
for (String k: sequences.keySet()) {
|
||||||
map.put(index, k);
|
map.put(index, k);
|
||||||
index++;
|
index++;
|
||||||
}
|
}
|
||||||
return map;
|
return map;
|
||||||
}
|
}
|
||||||
|
|
||||||
-private static Map<Integer, Integer> invertVertexMap(Map<Integer, Integer> map) {
-    Map<Integer, Integer> inverse = new HashMap<>();
+private static Map<String, Integer> makeSequenceToVertexMap(Map<String, SequenceRecord> sequences, Integer startValue) {
+    Map<String, Integer> map = new LinkedHashMap<>(); //LinkedHashMap to preserve order of entry
+    Integer index = startValue;
+    for (String k : sequences.keySet()) {
+        map.put(k, index);
+        index++;
+    }
+    return map;
+}

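The two label-assignment helpers above are mutual inverses: one hands out consecutive vertex labels keyed by label, the other hands out the same labels keyed by sequence. A minimal stand-alone sketch of that round trip, using plain sequence strings in place of the `SequenceRecord` key set (a simplifying assumption, since `SequenceRecord` is defined elsewhere in the project):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class VertexMapSketch {
    // Mirrors makeVertexToSequenceMap: assign consecutive labels to keys in entry order.
    static Map<Integer, String> vertexToSequence(Iterable<String> keys, int start) {
        Map<Integer, String> map = new LinkedHashMap<>(); // preserves entry order
        int index = start;
        for (String k : keys) {
            map.put(index++, k);
        }
        return map;
    }

    // Mirrors makeSequenceToVertexMap: the same assignment with key and value swapped.
    static Map<String, Integer> sequenceToVertex(Iterable<String> keys, int start) {
        Map<String, Integer> map = new LinkedHashMap<>();
        int index = start;
        for (String k : keys) {
            map.put(k, index++);
        }
        return map;
    }

    public static void main(String[] args) {
        List<String> seqs = List.of("ACGT", "TTAG", "GGCC");
        Map<Integer, String> v2s = vertexToSequence(seqs, 10);
        Map<String, Integer> s2v = sequenceToVertex(seqs, 10);
        // Every (label, sequence) pair in one map appears swapped in the other.
        for (Map.Entry<Integer, String> e : v2s.entrySet()) {
            if (!s2v.get(e.getValue()).equals(e.getKey())) throw new AssertionError();
        }
        System.out.println(v2s); // {10=ACGT, 11=TTAG, 12=GGCC}
    }
}
```

Because both helpers iterate the same `keySet()` of a `LinkedHashMap` with the same `startValue`, the pairing is deterministic; that is what makes the separate `invertVertexMap` below safe to apply.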
+private static void fillAdjacencyMatrix(double[][] weights, Integer vertexOffsetValue, Map<String, SequenceRecord> rowSequences,
+                                        Map<String, SequenceRecord> columnSequences, Map<String, Integer> rowToVertexMap,
+                                        Map<String, Integer> columnToVertexMap) {
+    for (String rowSeq : rowSequences.keySet()) {
+        for (Integer well : rowSequences.get(rowSeq).getWells()) {
+            for (String colSeq : columnSequences.keySet()) {
+                if (columnSequences.get(colSeq).isInWell(well)) {
+                    weights[rowToVertexMap.get(rowSeq)][columnToVertexMap.get(colSeq) - vertexOffsetValue] += 1.0;
+                }
+            }
+        }
+    }
+}

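`fillAdjacencyMatrix` accumulates, for each row/column sequence pair, the number of wells the two sequences co-occur in. A minimal sketch of that counting loop, with plain well sets standing in for `SequenceRecord.getWells()`/`isInWell()` and the vertex maps collapsed to list indices (both are simplifying assumptions):

```java
import java.util.List;
import java.util.Set;

public class AdjacencySketch {
    // weights[r][c] = number of wells shared by row sequence r and column sequence c.
    static double[][] sharedWellCounts(List<Set<Integer>> rowWells, List<Set<Integer>> colWells) {
        double[][] weights = new double[rowWells.size()][colWells.size()];
        for (int r = 0; r < rowWells.size(); r++) {
            for (int well : rowWells.get(r)) {           // each well the row sequence appears in
                for (int c = 0; c < colWells.size(); c++) {
                    if (colWells.get(c).contains(well)) { // column sequence shares that well
                        weights[r][c] += 1.0;
                    }
                }
            }
        }
        return weights;
    }

    public static void main(String[] args) {
        List<Set<Integer>> rows = List.of(Set.of(1, 2, 3), Set.of(3));
        List<Set<Integer>> cols = List.of(Set.of(2, 3), Set.of(4));
        double[][] w = sharedWellCounts(rows, cols);
        System.out.println(w[0][0] + " " + w[0][1] + " " + w[1][0]); // 2.0 0.0 1.0
    }
}
```

The `vertexOffsetValue` subtraction in the real method exists because row and column vertices share one global label space; the sketch sidesteps that by indexing rows and columns from zero.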
private static Map<String, Integer> invertVertexMap(Map<Integer, String> map) {
|
||||||
|
Map<String, Integer> inverse = new HashMap<>();
|
||||||
for (Integer k : map.keySet()) {
|
for (Integer k : map.keySet()) {
|
||||||
inverse.put(map.get(k), k);
|
inverse.put(map.get(k), k);
|
||||||
}
|
}
|
||||||
|
|||||||
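`invertVertexMap` simply swaps keys and values; since every vertex label is unique, no entries collide and inverting twice recovers the original map. A generic stand-alone sketch of that invariant:

```java
import java.util.HashMap;
import java.util.Map;

public class InvertSketch {
    // Swap keys and values; assumes values are unique, as vertex labels are.
    static <K, V> Map<V, K> invert(Map<K, V> map) {
        Map<V, K> inverse = new HashMap<>();
        for (Map.Entry<K, V> e : map.entrySet()) {
            inverse.put(e.getValue(), e.getKey());
        }
        return inverse;
    }

    public static void main(String[] args) {
        Map<Integer, String> v2s = Map.of(0, "ACGT", 1, "TTAG");
        Map<String, Integer> s2v = invert(v2s);
        // Double inversion is the identity when values are unique.
        if (!invert(s2v).equals(v2s)) throw new AssertionError();
        System.out.println(s2v.get("TTAG")); // 1
    }
}
```

If duplicate values were ever possible, later entries would silently overwrite earlier ones, so the uniqueness assumption matters.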
@@ -1,63 +1,45 @@
+import org.jheaps.AddressableHeap;
+
 import java.io.Serializable;
+import java.util.Map;

 public class Vertex implements Serializable, Comparable<Vertex> {

-    private SequenceType type;
+    private SequenceRecord record;
     private Integer vertexLabel;
-    private Integer sequence;
-    private Integer occupancy;
+    private Double potential;
+    private AddressableHeap queue;

-    public Vertex(Integer vertexLabel) {
+    public Vertex(SequenceRecord record, Integer vertexLabel) {
+        this.record = record;
         this.vertexLabel = vertexLabel;
     }

-    public Vertex(String vertexLabel) {
-        this.vertexLabel = Integer.parseInt(vertexLabel);
-    }
-
-    public Vertex(SequenceType type, Integer sequence, Integer occupancy, Integer vertexLabel) {
-        this.type = type;
-        this.vertexLabel = vertexLabel;
-        this.sequence = sequence;
-        this.occupancy = occupancy;
-    }
-
-    public SequenceType getType() {
-        return type;
-    }
-
-    public void setType(String type) {
-        this.type = SequenceType.valueOf(type);
-    }
+    public SequenceRecord getRecord() { return record; }
+
+    public SequenceType getType() { return record.getSequenceType(); }

     public Integer getVertexLabel() {
         return vertexLabel;
     }

-    public void setVertexLabel(String label) {
-        this.vertexLabel = Integer.parseInt(label);
-    }
-
-    public Integer getSequence() {
-        return sequence;
-    }
-
-    public void setSequence(String sequence) {
-        this.sequence = Integer.parseInt(sequence);
-    }
+    public String getSequence() {
+        return record.getSequence();
+    }

     public Integer getOccupancy() {
-        return occupancy;
+        return record.getOccupancy();
     }

-    public void setOccupancy(String occupancy) {
-        this.occupancy = Integer.parseInt(occupancy);
-    }
+    public Integer getReadCount() { return record.getReadCount(); }
+
+    public Integer getReadCount(Integer well) { return record.getReadCount(well); }
+
+    public Map<Integer, Integer> getWellOccupancies() { return record.getWellOccupancies(); }

     @Override //adapted from JGraphT example code
     public int hashCode()
     {
-        return (sequence == null) ? 0 : sequence.hashCode();
+        return (this.getSequence() == null) ? 0 : this.getSequence().hashCode();
     }

     @Override //adapted from JGraphT example code
@@ -70,22 +52,21 @@ public class Vertex implements Serializable, Comparable<Vertex> {
         if (getClass() != obj.getClass())
             return false;
         Vertex other = (Vertex) obj;
-        if (sequence == null) {
-            return other.sequence == null;
+        if (this.getSequence() == null) {
+            return other.getSequence() == null;
         } else {
-            return sequence.equals(other.sequence);
+            return this.getSequence().equals(other.getSequence());
         }
     }

     @Override //adapted from JGraphT example code
     public String toString()
     {
         StringBuilder sb = new StringBuilder();
         sb.append("(").append(vertexLabel)
-          .append(", Type: ").append(type.name())
-          .append(", Sequence: ").append(sequence)
-          .append(", Occupancy: ").append(occupancy).append(")");
+          .append(", Type: ").append(this.getType().name())
+          .append(", Sequence: ").append(this.getSequence())
+          .append(", Occupancy: ").append(this.getOccupancy()).append(")");
         return sb.toString();
     }
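After this change, `equals` and `hashCode` key on the sequence string alone, so two `Vertex` objects with different labels but the same sequence compare equal and hash alike (the contract JGraphT relies on). A minimal sketch of that contract, using a stripped-down stand-in class since `SequenceRecord` lives elsewhere:

```java
import java.util.Objects;

public class SequenceEqualitySketch {
    // Stand-in for Vertex: equality and hashing depend only on the sequence string.
    static final class V {
        final int label;
        final String sequence;

        V(int label, String sequence) {
            this.label = label;
            this.sequence = sequence;
        }

        @Override
        public int hashCode() {
            return sequence == null ? 0 : sequence.hashCode();
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (o == null || getClass() != o.getClass()) return false;
            return Objects.equals(sequence, ((V) o).sequence);
        }
    }

    public static void main(String[] args) {
        V a = new V(0, "ACGT");
        V b = new V(7, "ACGT"); // different label, same sequence
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // true
    }
}
```

Note the asymmetry this introduces: `compareTo` still orders by label while `equals` ignores it, so ordering is not consistent with equality, which is acceptable for plain `Comparable` use but would matter inside a `TreeSet`.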
@@ -93,5 +74,4 @@ public class Vertex implements Serializable, Comparable<Vertex> {
     public int compareTo(Vertex other) {
         return this.vertexLabel - other.getVertexLabel();
     }

 }