28 Commits

Author SHA1 Message Date
eugenefischer
16daf02dd6 Merge branch 'Dev_Vertex'
# Conflicts:
#	src/main/java/GraphModificationFunctions.java
#	src/main/java/GraphWithMapData.java
#	src/main/java/Simulator.java
#	src/main/java/Vertex.java
2022-09-25 18:33:26 -05:00
eugenefischer
04a077da2e update Readme 2022-09-25 18:24:12 -05:00
eugenefischer
740835f814 fix typo 2022-09-25 17:47:07 -05:00
eugenefischer
8a77d53f1f Output sequence counts before and after pre-filtering (currently pre-filtering only sequences present in all wells) 2022-09-25 17:20:50 -05:00
eugenefischer
58fa140ee5 add comments 2022-09-25 16:10:17 -05:00
eugenefischer
475bbf3107 Sort vertex lists by vertex label before making adjacency matrix 2022-09-25 15:54:28 -05:00
eugenefischer
4f2fa4cbbe Pre-filter saturating sequences only. Retaining singletons seems to improve matching accuracy in high sample rate test (well populations 10% of total cell sample size) 2022-09-25 15:19:56 -05:00
eugenefischer
58d418e44b Pre-filter saturating sequences only. Retaining singletons seems to improve matching accuracy in high sample rate test (well populations 10% of total cell sample size) 2022-09-25 15:06:46 -05:00
eugenefischer
1971a96467 Remove pre-filtering of singleton and saturating sequences 2022-09-25 14:55:43 -05:00
eugenefischer
e699795521 Revert "by-hand merge of needed code from custom vertex branch"
This reverts commit 29b844afd2.
2022-09-25 14:34:31 -05:00
eugenefischer
bd6d010b0b Revert "update TODO"
This reverts commit a054c0c20a.
2022-09-25 14:34:31 -05:00
eugenefischer
61d1eb3eb1 Revert "Reword output message"
This reverts commit 63317f2aa0.
2022-09-25 14:34:31 -05:00
eugenefischer
cb41b45204 Revert "Reword option menu item"
This reverts commit 06e72314b0.
2022-09-25 14:34:31 -05:00
eugenefischer
a84d2e1bfe Revert "Add comment on map data encodng"
This reverts commit 73c83bf35d.
2022-09-25 14:34:31 -05:00
eugenefischer
7b61d2c0d7 Revert "update version number"
This reverts commit e4e5a1f979.
2022-09-25 14:34:31 -05:00
eugenefischer
56454417c0 Revert "Restore pre-filtering of singleton and saturating sequences"
This reverts commit 5c03909a11.
2022-09-25 14:34:31 -05:00
eugenefischer
8ee1c5903e Merge branch 'master' into Dev_Vertex
# Conflicts:
#	src/main/java/GraphMLFileReader.java
#	src/main/java/InteractiveInterface.java
#	src/main/java/Simulator.java
2022-09-25 14:18:56 -05:00
eugenefischer
dea4972927 remove prefiltering of singletons and saturating sequences 2022-09-21 16:09:08 -05:00
eugenefischer
9ae38bf247 Fix bug in correct match counter 2022-09-21 15:59:23 -05:00
817fe51708 Code cleanup 2022-02-26 09:56:46 -06:00
1ea68045ce Refactor cdr3 matching to use new Vertex class 2022-02-26 09:49:16 -06:00
75b2aa9553 testing graph attributes 2022-02-26 08:58:52 -06:00
b3dc10f287 add graph attributes to graphml writer 2022-02-26 08:15:48 -06:00
fb8d8d8785 make heap type an enum 2022-02-26 08:15:31 -06:00
ab437512e9 make Vertex serializable 2022-02-26 07:45:36 -06:00
7b03a3cce8 bugfix 2022-02-26 07:35:34 -06:00
f032d3e852 rewrite GraphML importer/exporter 2022-02-26 07:34:07 -06:00
b604b1d3cd Changing graph to use Vertex class 2022-02-26 06:19:08 -06:00
12 changed files with 101 additions and 90 deletions

View File

@@ -281,7 +281,7 @@ with different filtering options), the actual elapsed time was greater. File I/O
took slightly less time than the simulation itself. Real elapsed time from start to finish was under 30 minutes.
As mentioned in the theory section, performance could be improved by implementing a more efficient algorithm for finding
the maximum weighted matching.
the maximum weight matching.
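For context on the matching step mentioned here: the simulator (see the Simulator.java hunks below) computes the matching with JGraphT's LEDA-book-style `MaximumWeightBipartiteMatching`. A minimal sketch of that call on a toy bipartite graph; the class name, vertex labels, and weights are invented for illustration and this is not the project's actual wiring:

```java
import org.jgrapht.alg.interfaces.MatchingAlgorithm;
import org.jgrapht.alg.matching.MaximumWeightBipartiteMatching;
import org.jgrapht.graph.DefaultWeightedEdge;
import org.jgrapht.graph.SimpleWeightedGraph;

import java.util.Set;

public class MatchingSketch {
    public static void main(String[] args) {
        // Toy bipartite graph: "alpha" vertices 1-2, "beta" vertices 3-4,
        // with edge weights standing in for well-overlap counts.
        SimpleWeightedGraph<Integer, DefaultWeightedEdge> g =
                new SimpleWeightedGraph<>(DefaultWeightedEdge.class);
        Set<Integer> alphas = Set.of(1, 2);
        Set<Integer> betas = Set.of(3, 4);
        alphas.forEach(g::addVertex);
        betas.forEach(g::addVertex);
        g.setEdgeWeight(g.addEdge(1, 3), 5.0);
        g.setEdgeWeight(g.addEdge(1, 4), 2.0);
        g.setEdgeWeight(g.addEdge(2, 4), 4.0);

        // LEDA-book style maximum weight bipartite matching shipped with JGraphT
        MaximumWeightBipartiteMatching<Integer, DefaultWeightedEdge> mwm =
                new MaximumWeightBipartiteMatching<>(g, alphas, betas);
        MatchingAlgorithm.Matching<Integer, DefaultWeightedEdge> matching = mwm.getMatching();

        System.out.println("matched edges: " + matching.getEdges());
        System.out.println("total weight:  " + mwm.getMatchingWeight());
    }
}
```

Swapping in a faster algorithm (for example Duan and Su's, from the TODO list below) would only need to replace the `MaximumWeightBipartiteMatching` construction while keeping the surrounding graph setup.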
## BEHAVIOR WITH RANDOMIZED WELL POPULATIONS
@@ -347,10 +347,12 @@ roughly as though it had a constant well population equal to the plate's average
* ~~Apache Commons CSV library writes entries a row at a time~~
* _Got this working, but at the cost of a profoundly strange bug in graph occupancy filtering. Have reverted the repo until I can figure out what caused that. Given how easily the tidyverse transposes CSV matrices in R, it might not even be worth fixing._
* ~~Enable GraphML output in addition to serialized object binaries, for data portability~~ DONE
* ~~Custom vertex type with attribute for sequence occupancy?~~ DONE
* ~~Custom vertex type with attribute for sequence occupancy?~~ ABANDONED
* Advantage: would eliminate the need to use maps to associate vertices with sequences, which would make the code easier to understand.
* ~~Have a branch where this is implemented, but there's a bug that broke matching. Don't currently have time to fix.~~
* Have a branch where this is implemented, but there's a bug that broke matching. Don't currently have time to fix.
* ~~Re-implement command line arguments, to enable scripting and statistical simulation studies~~ DONE
* ~~Implement custom Vertex class to simplify code and make it easier to implement different MWM algorithms~~ DONE
* This also seems to be faster than the version with lots of maps when using the same algorithm, which is a nice bonus!
* Re-implement CDR1 matching method
* Implement Duan and Su's maximum weight matching algorithm
* Add controllable algorithm-type parameter?
@@ -361,7 +363,7 @@ roughly as though it had a constant well population equal to the plate's average
* Implement Vose's alias method for arbitrary statistical distributions of cells
* Should probably refactor to use apache commons rng for this
* Use commons JCS for caching
* Enable post-filtering instead of pre-filtering. Pre-filtering of things like singleton sequences or saturating-occupancy sequences reduces graph size, but could conceivably reduce pairing accuracy by throwing away data. While these sequences have very little signal, it would be interesting to compare unfiltered results to filtered results. This would require a much, much faster MWM algorithm, though, to handle the much larger graphs. Possible one of the linear-time approximation algorithms.
* Parameterize pre-filtering. Currently, sequences present in all wells are filtered out before constructing the graph, which massively reduces graph size. But, ideally, no pre-filtering would be necessary.
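A minimal sketch of what a parameterized pre-filter could look like, assuming the occupancy maps are keyed by sequence id with well counts as values (the method name mirrors the `filterByOccupancyThresholds` calls visible in the Simulator.java hunks below; the real signature is not shown in this diff):

```java
import java.util.Map;

public final class OccupancyPrefilter {
    private OccupancyPrefilter() {}

    // Keep only sequences whose well occupancy lies in [minWells, maxWells].
    // wellCounts maps sequence id -> number of wells containing that sequence.
    public static void filterByOccupancyThresholds(Map<Integer, Integer> wellCounts,
                                                   int minWells, int maxWells) {
        wellCounts.values().removeIf(occ -> occ < minWells || occ > maxWells);
    }
}
```

With this shape, `(1, numWells - 1)` drops only saturating sequences, `(2, numWells - 1)` also drops singletons (the two behaviors compared in the commits above), and `(1, numWells)` would disable pre-filtering entirely.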
## CITATIONS

View File

@@ -13,10 +13,10 @@ public class BiGpairSEQ {
private static boolean cacheCells = false;
private static boolean cachePlate = false;
private static boolean cacheGraph = false;
private static String priorityQueueHeapType = "FIBONACCI";
private static HeapType priorityQueueHeapType = HeapType.FIBONACCI;
private static boolean outputBinary = true;
private static boolean outputGraphML = false;
private static final String version = "version 3.0";
private static final String version = "version 2.0";
public static void main(String[] args) {
if (args.length == 0) {
@@ -157,15 +157,15 @@ public class BiGpairSEQ {
}
public static String getPriorityQueueHeapType() {
return priorityQueueHeapType;
return priorityQueueHeapType.name();
}
public static void setPairingHeap() {
priorityQueueHeapType = "PAIRING";
priorityQueueHeapType = HeapType.PAIRING;
}
public static void setFibonacciHeap() {
priorityQueueHeapType = "FIBONACCI";
priorityQueueHeapType = HeapType.FIBONACCI;
}
public static boolean outputBinary() {return outputBinary;}

View File

@@ -13,8 +13,12 @@ public class CellSample {
List<Integer> numbersCDR3 = new ArrayList<>();
List<Integer> numbersCDR1 = new ArrayList<>();
Integer numDistCDR3s = 2 * numDistinctCells + 1;
//Assign consecutive integers for each CDR3. This ensures they are all unique.
IntStream.range(1, numDistCDR3s + 1).forEach(i -> numbersCDR3.add(i));
//After all CDR3s are assigned, start assigning consecutive integers to CDR1s
//There will usually be fewer integers in the CDR1 list, which will allow repeats below
IntStream.range(numDistCDR3s + 1, numDistCDR3s + 1 + (numDistCDR3s / cdr1Freq) + 1).forEach(i -> numbersCDR1.add(i));
//randomize the order of the numbers in the lists
Collections.shuffle(numbersCDR3);
Collections.shuffle(numbersCDR1);
@@ -22,11 +26,15 @@ public class CellSample {
//two CDR3s, and two CDR1s. First two values are CDR3s (alpha, beta), second two are CDR1s (alpha, beta)
List<Integer[]> distinctCells = new ArrayList<>();
for(int i = 0; i < numbersCDR3.size() - 1; i = i + 2){
//Go through entire CDR3 list once, make pairs of alphas and betas
Integer tmpCDR3a = numbersCDR3.get(i);
Integer tmpCDR3b = numbersCDR3.get(i+1);
//Go through (likely shorter) CDR1 list as many times as necessary, make pairs of alphas and betas
Integer tmpCDR1a = numbersCDR1.get(i % numbersCDR1.size());
Integer tmpCDR1b = numbersCDR1.get((i+1) % numbersCDR1.size());
//Make the array representing the cell
Integer[] tmp = {tmpCDR3a, tmpCDR3b, tmpCDR1a, tmpCDR1b};
//Add the cell to the list of distinct cells
distinctCells.add(tmp);
}
this.cells = distinctCells;
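The comments added above spell out the cell-assembly scheme: each cell gets two unique CDR3 integers, while CDR1 integers cycle through a shorter list via the modulo index and therefore repeat across cells. A tiny standalone illustration of that indexing (toy list sizes and the class name are made up):

```java
import java.util.List;

public class Cdr1CyclingDemo {
    public static void main(String[] args) {
        List<Integer> cdr3s = List.of(1, 2, 3, 4, 5, 6); // unique, consumed pairwise
        List<Integer> cdr1s = List.of(7, 8, 9);          // shorter list, reused

        for (int i = 0; i < cdr3s.size() - 1; i += 2) {
            int cdr3a = cdr3s.get(i);
            int cdr3b = cdr3s.get(i + 1);
            int cdr1a = cdr1s.get(i % cdr1s.size());       // wraps around, so repeats
            int cdr1b = cdr1s.get((i + 1) % cdr1s.size());
            System.out.printf("cell: [%d, %d, %d, %d]%n", cdr3a, cdr3b, cdr1a, cdr1b);
        }
    }
}
```

This prints cells `[1, 2, 7, 8]`, `[3, 4, 9, 7]`, and `[5, 6, 8, 9]`: every CDR3 appears once, while each of the three CDR1 values appears twice.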

View File

@@ -3,9 +3,8 @@ import org.jgrapht.graph.SimpleWeightedGraph;
import org.jgrapht.nio.Attribute;
import org.jgrapht.nio.AttributeType;
import org.jgrapht.nio.DefaultAttribute;
import org.jgrapht.nio.dot.DOTExporter;
import org.jgrapht.nio.graphml.GraphMLExporter;
import org.jgrapht.nio.graphml.GraphMLExporter.AttributeCategory;
import org.w3c.dom.Attr;
import java.io.BufferedWriter;
import java.io.IOException;
@@ -13,14 +12,14 @@ import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
public class GraphMLFileWriter {
String filename;
SimpleWeightedGraph graph;
GraphWithMapData data;
Map<String, Attribute> graphAttributes;
public GraphMLFileWriter(String filename, GraphWithMapData data) {
if(!filename.matches(".*\\.graphml")){
@@ -28,61 +27,52 @@ public class GraphMLFileWriter {
}
this.filename = filename;
this.data = data;
this.graph = data.getGraph();
graphAttributes = createGraphAttributes();
}
public GraphMLFileWriter(String filename, SimpleWeightedGraph<Vertex, DefaultWeightedEdge> graph) {
if(!filename.matches(".*\\.graphml")){
filename = filename + ".graphml";
}
this.filename = filename;
this.graph = graph;
}
private Map<String, Attribute> createGraphAttributes(){
Map<String, Attribute> ga = new HashMap<>();
//Sample plate filename
ga.put("sample plate filename", DefaultAttribute.createAttribute(data.getSourceFilename()));
// Number of wells
ga.put("well count", DefaultAttribute.createAttribute(data.getNumWells().toString()));
//Well populations
Integer[] wellPopulations = data.getWellPopulations();
StringBuilder populationsStringBuilder = new StringBuilder();
populationsStringBuilder.append(wellPopulations[0].toString());
for(int i = 1; i < wellPopulations.length; i++){
populationsStringBuilder.append(", ");
populationsStringBuilder.append(wellPopulations[i].toString());
}
String wellPopulationsString = populationsStringBuilder.toString();
ga.put("well populations", DefaultAttribute.createAttribute(wellPopulationsString));
return ga;
}
// public void writeGraphToFile() {
// try(BufferedWriter writer = Files.newBufferedWriter(Path.of(filename), StandardOpenOption.CREATE_NEW);
// ){
// GraphMLExporter<SimpleWeightedGraph, BufferedWriter> exporter = new GraphMLExporter<>();
// exporter.exportGraph(graph, writer);
// } catch(IOException ex){
// System.out.println("Could not make new file named "+filename);
// System.err.println(ex);
// }
// }
public void writeGraphToFile() {
SimpleWeightedGraph graph = data.getGraph();
Map<Integer, Integer> vertexToAlphaMap = data.getPlateVtoAMap();
Map<Integer, Integer> vertexToBetaMap = data.getPlateVtoBMap();
Map<Integer, Integer> alphaOccs = data.getAlphaWellCounts();
Map<Integer, Integer> betaOccs = data.getBetaWellCounts();
try(BufferedWriter writer = Files.newBufferedWriter(Path.of(filename), StandardOpenOption.CREATE_NEW);
){
//create exporter. Let the vertex labels be the unique ids for the vertices
GraphMLExporter<Vertex, SimpleWeightedGraph<Vertex, DefaultWeightedEdge>> exporter = new GraphMLExporter<>(v -> v.getVertexLabel().toString());
GraphMLExporter<Integer, SimpleWeightedGraph<Vertex, DefaultWeightedEdge>> exporter = new GraphMLExporter<>(v -> v.toString());
//set to export weights
exporter.setExportEdgeWeights(true);
//Set graph attributes
exporter.setGraphAttributeProvider( () -> graphAttributes);
//set type, sequence, and occupancy attributes for each vertex
exporter.setVertexAttributeProvider( v -> {
Map<String, Attribute> attributes = new HashMap<>();
attributes.put("type", DefaultAttribute.createAttribute(v.getType().name()));
attributes.put("sequence", DefaultAttribute.createAttribute(v.getSequence()));
attributes.put("occupancy", DefaultAttribute.createAttribute(v.getOccupancy()));
if(vertexToAlphaMap.containsKey(v)) {
attributes.put("type", DefaultAttribute.createAttribute("CDR3 Alpha"));
attributes.put("sequence", DefaultAttribute.createAttribute(vertexToAlphaMap.get(v)));
attributes.put("occupancy", DefaultAttribute.createAttribute(
alphaOccs.get(vertexToAlphaMap.get(v))));
}
else if(vertexToBetaMap.containsKey(v)) {
attributes.put("type", DefaultAttribute.createAttribute("CDR3 Beta"));
attributes.put("sequence", DefaultAttribute.createAttribute(vertexToBetaMap.get(v)));
attributes.put("occupancy", DefaultAttribute.createAttribute(
betaOccs.get(vertexToBetaMap.get(v))));
}
return attributes;
});
//register the attributes
for(String s : graphAttributes.keySet()) {
exporter.registerAttribute(s, AttributeCategory.GRAPH, AttributeType.STRING);
}
exporter.registerAttribute("type", AttributeCategory.NODE, AttributeType.STRING);
exporter.registerAttribute("sequence", AttributeCategory.NODE, AttributeType.STRING);
exporter.registerAttribute("occupancy", AttributeCategory.NODE, AttributeType.STRING);
exporter.registerAttribute("type", GraphMLExporter.AttributeCategory.NODE, AttributeType.STRING);
exporter.registerAttribute("sequence", GraphMLExporter.AttributeCategory.NODE, AttributeType.STRING);
exporter.registerAttribute("occupancy", GraphMLExporter.AttributeCategory.NODE, AttributeType.STRING);
//export the graph
exporter.exportGraph(graph, writer);
} catch(IOException ex){
@@ -91,3 +81,4 @@ public class GraphMLFileWriter {
}
}
}

View File

@@ -8,7 +8,7 @@ import java.util.Map;
public interface GraphModificationFunctions {
//remove over- and under-weight edges
//remove over- and under-weight edges, return removed edges
static Map<Vertex[], Integer> filterByOverlapThresholds(SimpleWeightedGraph<Vertex, DefaultWeightedEdge> graph,
int low, int high, boolean saveEdges) {
Map<Vertex[], Integer> removedEdges = new HashMap<>();
@@ -35,7 +35,7 @@ public interface GraphModificationFunctions {
return removedEdges;
}
//Remove edges for pairs with large occupancy discrepancy
//Remove edges for pairs with large occupancy discrepancy, return removed edges
static Map<Vertex[], Integer> filterByRelativeOccupancy(SimpleWeightedGraph<Vertex, DefaultWeightedEdge> graph,
Integer maxOccupancyDifference, boolean saveEdges) {
Map<Vertex[], Integer> removedEdges = new HashMap<>();
@@ -63,7 +63,7 @@ public interface GraphModificationFunctions {
return removedEdges;
}
//Remove edges for pairs where overlap size is significantly lower than the well occupancy
//Remove edges for pairs where overlap size is significantly lower than the well occupancy, return removed edges
static Map<Vertex[], Integer> filterByOverlapPercent(SimpleWeightedGraph<Vertex, DefaultWeightedEdge> graph,
Integer minOverlapPercent,
boolean saveEdges) {

View File

@@ -25,9 +25,9 @@ public class GraphWithMapData implements java.io.Serializable {
private final Duration time;
public GraphWithMapData(SimpleWeightedGraph graph, Integer numWells, Integer[] wellConcentrations,
Map<Integer, Integer> distCellsMapAlphaKey, Duration time){
Map<Integer, Integer> distCellsMapAlphaKey, Integer alphaCount, Integer betaCount, Duration time){
// Map<Integer, Integer> plateVtoAMap, Integer alphaCount, Integer betaCount,
// Map<Integer, Integer> plateVtoAMap,
// Map<Integer,Integer> plateVtoBMap, Map<Integer, Integer> plateAtoVMap,
// Map<Integer, Integer> plateBtoVMap, Map<Integer, Integer> alphaWellCounts,
// Map<Integer, Integer> betaWellCounts,) {
@@ -58,13 +58,13 @@ public class GraphWithMapData implements java.io.Serializable {
return wellPopulations;
}
// public Integer getAlphaCount() {
// return alphaCount;
// }
//
// public Integer getBetaCount() {
// return betaCount;
// }
public Integer getAlphaCount() {
return alphaCount;
}
public Integer getBetaCount() {
return betaCount;
}
public Map<Integer, Integer> getDistCellsMapAlphaKey() {
return distCellsMapAlphaKey;

View File

@@ -0,0 +1,4 @@
public enum HeapType {
FIBONACCI,
PAIRING
}
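The new `HeapType` enum replaces the string constants in the BiGpairSEQ hunk above. One way the enum value could be mapped onto the jheaps heap that backs the matching's priority queue (a hypothetical helper, not the project's actual wiring; it assumes the jheaps dependency the Simulator already uses):

```java
import org.jheaps.AddressableHeap;
import org.jheaps.tree.FibonacciHeap;
import org.jheaps.tree.PairingHeap;

public final class HeapFactory {
    private HeapFactory() {}

    // Pick the jheaps implementation matching the enum value;
    // keys fall back to their natural ordering.
    public static <K, V> AddressableHeap<K, V> create(HeapType type) {
        switch (type) {
            case PAIRING:
                return new PairingHeap<>();
            case FIBONACCI:
            default:
                return new FibonacciHeap<>();
        }
    }
}
```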

View File

@@ -258,7 +258,7 @@ public class InteractiveInterface {
cellFile = sc.next();
System.out.print("\nPlease enter name of an existing sample plate file: ");
plateFile = sc.next();
System.out.println("\nThe graph and occupancy data will be written to a file.");
System.out.println("\nThe graph and occupancy data will be written to a serialized binary file.");
System.out.print("Please enter a name for the output file: ");
filename = sc.next();
} catch (InputMismatchException ex) {
@@ -504,7 +504,7 @@ public class InteractiveInterface {
System.out.println("2) Turn " + getOnOff(!BiGpairSEQ.cachePlate()) + " plate file caching");
System.out.println("3) Turn " + getOnOff(!BiGpairSEQ.cacheGraph()) + " graph/data file caching");
System.out.println("4) Turn " + getOnOff(!BiGpairSEQ.outputBinary()) + " serialized binary graph output");
System.out.println("5) Turn " + getOnOff(!BiGpairSEQ.outputGraphML()) + " GraphML graph output (for data portability to other programs)");
System.out.println("5) Turn " + getOnOff(!BiGpairSEQ.outputGraphML()) + " GraphML graph output");
System.out.println("6) Maximum weight matching algorithm options");
System.out.println("0) Return to main menu");
try {

View File

@@ -158,9 +158,9 @@ public class Plate {
//returns a map of the counts of the sequence at cell index sIndex, in a range of wells
public Map<Integer, Integer> assayWellsSequenceS(int start, int end, int... sIndices) {
Map<Integer,Integer> assay = new HashMap<>();
for(int pIndex: sIndices){
for(int sIndex: sIndices){
for(int i = start; i < end; i++){
countSequences(assay, wells.get(i), pIndex);
countSequences(assay, wells.get(i), sIndex);
}
}
return assay;
@@ -169,6 +169,7 @@ public class Plate {
private void countSequences(Map<Integer, Integer> wellMap, List<Integer[]> well, int... sIndices) {
for(Integer[] cell : well) {
for(int sIndex: sIndices){
//skip dropout sequences, which have value -1
if(cell[sIndex] != -1){
wellMap.merge(cell[sIndex], 1, (oldValue, newValue) -> oldValue + newValue);
}
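A minor aside on the `merge` call above: the remapping lambda is equivalent to the `Integer::sum` method reference, which reads a little more directly. A drop-in alternative:

```java
wellMap.merge(cell[sIndex], 1, Integer::sum);
```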

View File

@@ -1,8 +0,0 @@
//enum for tagging types of sequences
//Listed in order that they appear in a cell array, so ordinal() method will return correct index
public enum SequenceType {
CDR3_ALPHA,
CDR3_BETA,
CDR1_ALPHA,
CDR1_BETA
}

View File

@@ -47,10 +47,10 @@ public class Simulator implements GraphModificationFunctions {
if(verbose){System.out.println("All betas count: " + betaCount);}
if(verbose){System.out.println("Well maps made");}
if(verbose){System.out.println("Removing singleton sequences and sequences present in all wells.");}
filterByOccupancyThresholds(allAlphas, 2, numWells - 1);
filterByOccupancyThresholds(allBetas, 2, numWells - 1);
//ideally we wouldn't do any graph pre-filtering. But sequences present in all wells add a huge number of edges to the graph and don't carry any signal value
if(verbose){System.out.println("Removing sequences present in all wells.");}
filterByOccupancyThresholds(allAlphas, 1, numWells - 1);
filterByOccupancyThresholds(allBetas, 1, numWells - 1);
if(verbose){System.out.println("Sequences removed");}
int pairableAlphaCount = allAlphas.size();
if(verbose){System.out.println("Remaining alphas count: " + pairableAlphaCount);}
@@ -105,6 +105,9 @@ public class Simulator implements GraphModificationFunctions {
Vertex alphaVertex = new Vertex(SequenceType.CDR3_ALPHA, seq, alphaWellCounts.get(seq), plateAtoVMap.get(seq));
alphaVertices.add(alphaVertex);
}
//Sort to make sure the order of vertices in list matches the order of the adjacency matrix
Collections.sort(alphaVertices);
//Add ordered list of vertices to the graph
graphGenerator.first(alphaVertices);
//the list of beta vertices
//List<Integer> betaVertices = new ArrayList<>(plateVtoBMap.keySet());//This will work because LinkedHashMap preserves order of entry
@@ -113,6 +116,9 @@ public class Simulator implements GraphModificationFunctions {
Vertex betaVertex = new Vertex(SequenceType.CDR3_BETA, seq, betaWellCounts.get(seq), plateBtoVMap.get(seq));
betaVertices.add(betaVertex);
}
//Sort to make sure the order of vertices in list matches the order of the adjacency matrix
Collections.sort(betaVertices);
//Add ordered list of vertices to the graph
graphGenerator.second(betaVertices);
//use adjacency matrix of weight created previously
graphGenerator.weights(weights);
@@ -123,7 +129,7 @@ public class Simulator implements GraphModificationFunctions {
Duration time = Duration.between(start, stop);
//create GraphWithMapData object
GraphWithMapData output = new GraphWithMapData(graph, numWells, samplePlate.getPopulations(), distCellsMapAlphaKey, time);
GraphWithMapData output = new GraphWithMapData(graph, numWells, samplePlate.getPopulations(), distCellsMapAlphaKey, alphaCount, betaCount, time);
//Set source file name in graph to name of sample plate
output.setSourceFilename(samplePlate.getFilename());
//return GraphWithMapData object
@@ -152,8 +158,8 @@ public class Simulator implements GraphModificationFunctions {
betas.add(v);
}
}
Integer alphaCount = alphas.size();
Integer betaCount = betas.size();
Integer graphAlphaCount = alphas.size();
Integer graphBetaCount = betas.size();
//remove edges with weights outside given overlap thresholds, add those to removed edge list
if(verbose){System.out.println("Eliminating edges with weights outside overlap threshold values");}
@@ -173,9 +179,9 @@ public class Simulator implements GraphModificationFunctions {
if(verbose){System.out.println("Edges between vertices of with excessively different occupancy values " +
"removed");}
//Find Maximum Weighted Matching
//Find Maximum Weight Matching
//using jheaps library class PairingHeap for improved efficiency
if(verbose){System.out.println("Finding maximum weighted matching");}
if(verbose){System.out.println("Finding maximum weight matching");}
MaximumWeightBipartiteMatching maxWeightMatching;
//Use correct heap type for priority queue
String heapType = BiGpairSEQ.getPriorityQueueHeapType();
@@ -260,7 +266,7 @@ public class Simulator implements GraphModificationFunctions {
//Metadata comments for CSV file
String algoType = "LEDA book with heap: " + heapType;
int min = Math.min(alphaCount, betaCount);
int min = Math.min(graphAlphaCount, graphBetaCount);
//matching weight
BigDecimal totalMatchingWeight = maxWeightMatching.getMatchingWeight();
//rate of attempted matching
@@ -295,8 +301,10 @@ public class Simulator implements GraphModificationFunctions {
metadata.put("algorithm type", algoType);
metadata.put("matching weight", totalMatchingWeight.toString());
metadata.put("well populations", wellPopulationsString);
metadata.put("total alphas found", alphaCount.toString());
metadata.put("total betas found", betaCount.toString());
metadata.put("total alphas on plate", data.getAlphaCount().toString());
metadata.put("total betas on plate", data.getBetaCount().toString());
metadata.put("alphas in graph (after pre-filtering)", graphAlphaCount.toString());
metadata.put("betas in graph (after pre-filtering)", graphBetaCount.toString());
metadata.put("high overlap threshold", highThreshold.toString());
metadata.put("low overlap threshold", lowThreshold.toString());
metadata.put("minimum overlap percent", minOverlapPercent.toString());

View File

@@ -1,6 +1,6 @@
import java.io.Serializable;
public class Vertex implements Serializable {
public class Vertex implements Serializable, Comparable<Vertex> {
private SequenceType type;
private Integer vertexLabel;
private Integer sequence;
@@ -89,4 +89,9 @@ public class Vertex implements Serializable {
return sb.toString();
}
@Override
public int compareTo(Vertex other) {
return this.vertexLabel - other.getVertexLabel();
}
}
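One note on the new `compareTo`: subtracting boxed labels works for the label ranges used here, but `Integer.compare` is the overflow-safe equivalent if labels ever approach the int limits. A drop-in alternative:

```java
@Override
public int compareTo(Vertex other) {
    // Integer.compare never overflows, unlike subtraction of extreme values
    return Integer.compare(this.vertexLabel, other.getVertexLabel());
}
```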