Java Code Examples for java.util.DoubleSummaryStatistics#getAverage()
The following examples show how to use
java.util.DoubleSummaryStatistics#getAverage() .
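Before the project examples, here is a minimal, self-contained sketch of the method itself: getAverage() returns the arithmetic mean of all recorded values, and returns 0.0 (not NaN) when no values have been recorded.

```java
import java.util.DoubleSummaryStatistics;
import java.util.stream.DoubleStream;

public class GetAverageBasics {
    public static void main(String[] args) {
        // Collect summary statistics over a primitive double stream
        DoubleSummaryStatistics stats = DoubleStream.of(2.0, 4.0, 9.0).summaryStatistics();
        System.out.println(stats.getAverage()); // (2 + 4 + 9) / 3 = 5.0

        // An empty statistics object reports an average of 0.0, not NaN
        DoubleSummaryStatistics empty = new DoubleSummaryStatistics();
        System.out.println(empty.getAverage()); // 0.0
    }
}
```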
Example 1
Source File: BankStatementProcessor.java, from Real-World-Software-Development (Apache License 2.0)
public SummaryStatistics summarizeTransactions() {
    final DoubleSummaryStatistics doubleSummaryStatistics = bankTransactions.stream()
        .mapToDouble(BankTransaction::getAmount)
        .summaryStatistics();
    return new SummaryStatistics(doubleSummaryStatistics.getSum(),
        doubleSummaryStatistics.getMax(),
        doubleSummaryStatistics.getMin(),
        doubleSummaryStatistics.getAverage());
}
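The same statistics can be gathered in a single collector step with Collectors.summarizingDouble, which avoids the explicit mapToDouble. A runnable sketch of Example 1's pattern; the BankTransaction record and the sample amounts here are hypothetical stand-ins, not the classes from the book's repository:

```java
import java.util.DoubleSummaryStatistics;
import java.util.List;
import java.util.stream.Collectors;

public class TransactionSummary {
    // Hypothetical stand-in for the book's BankTransaction class
    record BankTransaction(String description, double amount) {}

    public static void main(String[] args) {
        List<BankTransaction> bankTransactions = List.of(
                new BankTransaction("rent", -600.0),
                new BankTransaction("salary", 3000.0),
                new BankTransaction("groceries", -150.0));

        // Collectors.summarizingDouble produces the same DoubleSummaryStatistics
        // as mapToDouble(...).summaryStatistics() in Example 1
        DoubleSummaryStatistics stats = bankTransactions.stream()
                .collect(Collectors.summarizingDouble(BankTransaction::amount));

        System.out.printf("sum=%.2f max=%.2f min=%.2f avg=%.2f%n",
                stats.getSum(), stats.getMax(), stats.getMin(), stats.getAverage());
    }
}
```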
Example 2
Source File: DoubleSummary.java, from jenetics (Apache License 2.0)
/**
 * Return a new value object of the statistical summary, currently
 * represented by the {@code statistics} object.
 *
 * @param statistics the creating (mutable) statistics class
 * @return the statistical moments
 */
public static DoubleSummary of(final DoubleSummaryStatistics statistics) {
    return new DoubleSummary(
        statistics.getCount(),
        statistics.getMin(),
        statistics.getMax(),
        statistics.getSum(),
        statistics.getAverage()
    );
}
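DoubleSummaryStatistics is mutable, which is why Example 2 copies its fields into an immutable value object. A minimal, self-contained sketch of the same idea using a Java record; the DoubleSummary name and fields mirror the example above but this is not the actual jenetics class:

```java
import java.util.DoubleSummaryStatistics;
import java.util.stream.DoubleStream;

public class SnapshotExample {
    // Immutable snapshot of a (mutable) DoubleSummaryStatistics, modeled on Example 2
    record DoubleSummary(long count, double min, double max, double sum, double average) {
        static DoubleSummary of(DoubleSummaryStatistics s) {
            return new DoubleSummary(s.getCount(), s.getMin(), s.getMax(),
                    s.getSum(), s.getAverage());
        }
    }

    public static void main(String[] args) {
        DoubleSummaryStatistics stats = DoubleStream.of(1.5, 2.5, 4.0).summaryStatistics();
        DoubleSummary snapshot = DoubleSummary.of(stats);

        stats.accept(100.0); // mutating the statistics does not affect the snapshot
        System.out.println(snapshot.average()); // still the mean of 1.5, 2.5, 4.0
    }
}
```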
Example 3
Source File: EntropyForEachLine.java, from SLP-Core (MIT License)
public static void main(String[] args) {
    if (args.length < 2 || !new File(args[1]).isFile()) {
        System.err.println("Please provide a train file/directory and a test file for this example");
        return;
    }
    File train = new File(args[0]);
    File test = new File(args[1]);

    Lexer lexer = new JavaLexer();  // Use a Java lexer; if your code is already lexed, use whitespace or tokenized lexer
    LexerRunner lexerRunner = new LexerRunner(lexer, false);  // Don't model lines in isolation for code files.
                                                              // We will still get per-line, per-token entropies
    lexerRunner.setSentenceMarkers(true);  // Add start and end markers to the files
    lexerRunner.setExtension("java");  // We only lex Java files

    Vocabulary vocabulary = new Vocabulary();  // Create an empty vocabulary
    Model model = new JMModel(6, new GigaCounter());  // Standard smoothing for code, giga-counter for large corpora
    model = MixModel.standard(model, new CacheModel());  // Use a simple cache model; see JavaRunner for more options
    ModelRunner modelRunner = new ModelRunner(model, lexerRunner, vocabulary);  // Use above lexer and vocabulary
    modelRunner.learnDirectory(train);  // Teach the model all the data in "train"

    // Modeling one file gives us entropy per line, per token, in a nested list.
    // See also modelRunner.modelDirectory
    List<List<Double>> fileEntropies = modelRunner.modelFile(test);
    List<List<String>> fileTokens = lexerRunner.lexFile(test)  // Let's also retrieve the tokens on each line
        .map(l -> l.collect(Collectors.toList()))
        .collect(Collectors.toList());

    for (int i = 0; i < fileEntropies.size(); i++) {
        List<String> lineTokens = fileTokens.get(i);
        List<Double> lineEntropies = fileEntropies.get(i);

        // First use Java's stream API to summarize entropies on this line
        // (see modelRunner.getStats for summarizing file or directory results)
        DoubleSummaryStatistics lineStatistics = lineEntropies.stream()
            .mapToDouble(Double::doubleValue)
            .summaryStatistics();
        double averageEntropy = lineStatistics.getAverage();

        // Then, print out the average entropy and the entropy for every token on this line
        System.out.printf("Line %d, avg.: %.4f, tokens:", i + 1, averageEntropy);
        for (int j = 0; j < lineTokens.size(); j++) {
            System.out.printf(" %s: %.4f", lineTokens.get(j), lineEntropies.get(j));
        }
        System.out.println();
    }
}
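Example 3 summarizes entropies one line at a time; per-line statistics can also be merged into a whole-file summary with DoubleSummaryStatistics#combine, so that getAverage() on the merged object gives the mean over every token in the file. A minimal, self-contained sketch of that merge step (the entropy values are made up for illustration):

```java
import java.util.DoubleSummaryStatistics;
import java.util.List;

public class CombineExample {
    public static void main(String[] args) {
        // Per-line token entropies (illustrative values only)
        List<List<Double>> fileEntropies = List.of(
                List.of(1.0, 3.0),
                List.of(2.0, 4.0, 5.0));

        DoubleSummaryStatistics fileStats = new DoubleSummaryStatistics();
        for (List<Double> line : fileEntropies) {
            DoubleSummaryStatistics lineStats = line.stream()
                    .mapToDouble(Double::doubleValue)
                    .summaryStatistics();
            fileStats.combine(lineStats); // merge line-level stats into the file total
        }
        System.out.println(fileStats.getAverage()); // mean over all 5 tokens = 3.0
    }
}
```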