Set up Emacs+Ensime
The basic reference is on the official GitHub page of ensime: https://github.com/ensime/ensime-emacs
First, run M-x package-install in Emacs and choose ensime (or use M-x package-list-packages, mark ensime with I, and press X). This installs the ensime package for your current Emacs. Then update the .emacs file with:
(require 'package)
(package-initialize)
(add-to-list 'package-archives
'("melpa" . "http://melpa.org/packages/") t)
(package-initialize)
(when (not package-archive-contents)
(package-refresh-contents))
;; ensime for scala mode hook
(require 'ensime)
(add-hook 'scala-mode-hook 'ensime-scala-mode-hook)
;; OPTIONAL
;; there are some great Scala yasnippets, browse through:
;; https://github.com/AndreaCrotti/yasnippet-snippets/tree/master/scala-mode
(add-hook 'scala-mode-hook #'yas-minor-mode)
;; but company-mode / yasnippet conflict. Disable TAB in company-mode with
(define-key company-active-map [tab] nil)
The next step is to choose a project management tool; the options include sbt, gradle, and maven. For this quick test I'm using sbt. If that's also your preference, create the file ~/.sbt/0.13/plugins/plugins.sbt and put the following in it:
addSbtPlugin("org.ensime" % "ensime-sbt" % "0.2.0")
Now it is time to create your project folder, cd
into it, then
mkdir -p src/{main,test}/scala
Also in the project folder, create build.sbt
with the following content:
name := "hello_scala"
version := "1.0"
scalaVersion := "2.10.5"
Note that the blank lines between the settings are necessary. Now let us generate the .ensime file from the terminal.
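With the ensime-sbt plugin, this is typically done by running the following sbt task in the project folder (the exact task name may vary across plugin versions):
sbt gen-ensime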
A hidden file .ensime will be added to the project folder, together with a folder project. Create a file under src/main/scala/, say with the name Main.scala, and write in it
package greeter
object Hello extends App {
println("Hello World")
}
and save. The mode line in Emacs now reads (Scala [ENSIME: (Disconnected)]). We connect by running M-x ensime and pressing ENTER in Emacs. If everything works well, you should see in the mini-buffer
ENSIME ready. May the source be with you.
Pressing C-c C-b s brings up the sbt console. Run the project by typing 'run' in the sbt console. You will see output like the following:
Running sbt
[info] Loading global plugins from /home/trgao10/.sbt/0.13/plugins
[info] Set current project to hello_scala (in build file:/home/trgao10/Work/Scala/webGraphInfer/)
> run
[info] Compiling 1 Scala source to /home/trgao10/Work/Scala/webGraphInfer/target/scala-2.10/classes...
[info] Running greeter.Hello
Hello World
[success] Total time: 3 s, completed Sep 30, 2015 10:54:33 PM
>
This indicates a successful setup.
Compiling Spark from Source
Compiling software libraries is often a huge pain: it tends to require domain knowledge before you have even stepped into the domain. Compiling Spark is nevertheless an important part of developing applications with Spark, since the project evolves aggressively, with new versions popping up every couple of months or so. Here we demonstrate how to compile the most recent version, 1.5.1 (released on Oct 02, 2015), on an Ubuntu Linux laptop with amd64 architecture. Of course, the official "Building Spark" page is the definitive starting point.
First of all, download the source code, untar it, and cd into the decompressed folder. Make sure you have installed maven:
sudo apt-get install maven
Temporarily increase the memory budget for maven (if you are using the fish shell instead of bash, switch to bash before invoking export):
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
Start compiling. Gotcha: don't prepend your commands with sudo!
mvn -DskipTests clean package
If you would like to build against a particular version of Hadoop (e.g., 2.4.0), try the following:
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
The particular issue with Spark 1.5.1 is that it requires maven 3.3.3, while only 3.0.5 is available in the official Ubuntu repository. Let's purge the old version and replace it with a more up-to-date one. First, remove the old installation:
sudo apt-get purge -y maven
Then we follow the steps in this link:
- Download the Apache Maven 3.3.3 binary from the repository using the following command
wget http://mirrors.sonic.net/apache/maven/maven-3/3.3.3/binaries/apache-maven-3.3.3-bin.tar.gz
- Unzip the binary with tar
tar -zxf apache-maven-3.3.3-bin.tar.gz
- Copy the application directory to /usr/local
sudo cp -R apache-maven-3.3.3 /usr/local
- Make a soft link in /usr/bin for universal access to mvn
sudo ln -s /usr/local/apache-maven-3.3.3/bin/mvn /usr/bin/mvn
- Verify the mvn installation
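The standard check prints the version information:
mvn -version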
You should see the following printout:
Apache Maven 3.3.3 (7994120775791599e205a5524ec3e0dfe41d4a06; 2015-04-22T07:57:37-04:00)
Maven home: /usr/local/apache-maven-3.3.3
Java version: 1.8.0_60, vendor: Oracle Corporation
Java home: /usr/lib/jvm/java-8-oracle/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.13.0-65-generic", arch: "amd64", family: "unix"
After successfully compiling Spark, move it to /usr/local/:
sudo mv ./spark-1.5.1 /usr/local/
It is most convenient to build a soft link and only update the link when a newer version of Spark is installed. Installing a new Spark version is then as simple as
sudo rm /usr/local/share/spark
sudo ln -s /usr/local/spark-1.5.1 /usr/local/share/spark
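For the spark-shell command below to be found, the linked bin directory has to be on your PATH. Assuming you use bash, a line like the following in ~/.bashrc does the trick:
export PATH=$PATH:/usr/local/share/spark/bin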
Check that the version has indeed been updated:
trgao10@Terranius:$ spark-shell
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's repl log4j profile: org/apache/spark/log4j-defaults-repl.properties
To adjust logging level use sc.setLogLevel("INFO")
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 1.5.1
/_/
Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_80)
Type in expressions to have them evaluated.
Type :help for more information.
15/10/08 00:33:56 WARN Utils: Your hostname, Terranius resolves to a loopback address: 127.0.1.1; using 192.168.2.14 instead (on interface wlan0)
15/10/08 00:33:56 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/10/08 00:33:57 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
Spark context available as sc.
SQL context available as sqlContext.
Using GraphX to Build a Bipartite Graph
Like many popular graph processing libraries, GraphX represents a graph as a property graph, schematically defined as the class
class Graph[VD, ED] {
  val vertices: VertexRDD[VD]
  val edges: EdgeRDD[ED]
}
VD and ED here are Scala type parameters of the classes VertexRDD, EdgeRDD, and Graph. These type parameters can be primitive types such as String or Int, but they can also be user-defined classes. See the following schematic definition of VertexRDD (from the official Spark documentation).
class VertexRDD[VD] extends RDD[(VertexID, VD)] {
  // Filter the vertex set but preserves the internal index
  def filter(pred: Tuple2[VertexId, VD] => Boolean): VertexRDD[VD]
  // Transform the values without changing the ids (preserves the internal index)
  def mapValues[VD2](map: VD => VD2): VertexRDD[VD2]
  def mapValues[VD2](map: (VertexId, VD) => VD2): VertexRDD[VD2]
  // Show only vertices unique to this set based on their VertexId's
  def minus(other: RDD[(VertexId, VD)])
  // Remove vertices from this set that appear in the other set
  def diff(other: VertexRDD[VD]): VertexRDD[VD]
  // Join operators that take advantage of the internal indexing to accelerate joins (substantially)
  def leftJoin[VD2, VD3](other: RDD[(VertexId, VD2)])(f: (VertexId, VD, Option[VD2]) => VD3): VertexRDD[VD3]
  def innerJoin[U, VD2](other: RDD[(VertexId, U)])(f: (VertexId, VD, U) => VD2): VertexRDD[VD2]
  // Use the index on this RDD to accelerate a `reduceByKey` operation on the input RDD.
  def aggregateUsingIndex[VD2](other: RDD[(VertexId, VD2)], reduceFunc: (VD2, VD2) => VD2): VertexRDD[VD2]
}
The EdgeRDD class takes one type parameter ED, and is actually a subclass that extends RDD[Edge[ED]].
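As a concrete illustration of these type parameters (this toy snippet is not part of the website pipeline; the names toyVertices, toyEdges, toyGraph and all values are invented), a graph with String vertex properties and Int edge properties can be built from two RDDs, assuming a SparkContext sc is available (e.g., in spark-shell):
import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD

// vertices are (VertexId, VD) pairs, here with VD = String
val toyVertices: RDD[(VertexId, String)] =
  sc.parallelize(Array((1L, "alice"), (2L, "bob"), (3L, "carol")))
// edges are Edge(srcId, dstId, attr), here with ED = Int
val toyEdges: RDD[Edge[Int]] =
  sc.parallelize(Array(Edge(1L, 2L, 7), Edge(2L, 3L, 4)))
val toyGraph: Graph[String, Int] = Graph(toyVertices, toyEdges)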
For the website data, each entry consists of the following properties:
- dtbd_id
- domain_name
- ip_address
- traffic_rank
- class_ecomm
- date_found
- isp_name
- admin_email
- dns
- enforce_status
- registrant
- registrar
We intend to model the website data as a bipartite graph. There will be two different types of vertices in this graph: "Domain" and "Info". Since GraphX only allows a single vertex type per graph, we have to build an abstract vertex type and derive from it the "Domain Vertex" and "Info Vertex" subclasses. Defining a class in Scala is easy:
class VertexProperty extends Serializable {}
It is crucial that this class extends Serializable; otherwise Spark will complain about missing class constructors when it comes to serializing the derived subclasses.
Subclass Domain
:
case class Domain(val dtbd_id: String, val class_ecomm: String, val date_found: String, val isp_name: String, val enforce_status: String, val registrar: String, val traffic_rank: String) extends VertexProperty;
Subclass Info
:
case class Info(val info: String, val category: String) extends VertexProperty;
Let's start building the graph. First we extract all domain vertices, one for each line in the raw data:
val domainRDD: RDD[(VertexId, VertexProperty)] = rawDataWithIndex.map { line =>
  val ID = line._2
  val DomainInfo = line._1
  (ID, Domain(DomainInfo(schemaIndexMap("dtbd_id")), DomainInfo(schemaIndexMap("class_ecomm")), DomainInfo(schemaIndexMap("date_found")), DomainInfo(schemaIndexMap("isp_name")), DomainInfo(schemaIndexMap("enforce_status")), DomainInfo(schemaIndexMap("registrar")), DomainInfo(schemaIndexMap("traffic_rank"))))
};
schemaIndexMap is a Scala Map that hashes each schema field to its position in the raw data record. The only reason it appears here is that I did not find a Pandas-like library in Scala...
val schemaArray = "dtbd_id,domain_name,ip_address,traffic_rank,class_ecomm,date_found,isp_name,admin_email,dns,enforce_status,registrant,registrar".split(",");
val schemaIndexMap = Map(schemaArray.zip((0 to schemaArray.length-1)).toArray: _*);
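To make this concrete, the map sends each field name to its zero-based position in the schema string above; for instance:
schemaIndexMap("dtbd_id")     // 0
schemaIndexMap("domain_name") // 1
schemaIndexMap("registrar")   // 11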
Constructing all Info vertices requires going through the raw data again. To save space, we create Info vertices only for non-empty info fields.
case class Info(val info: String, val category: String) extends VertexProperty;
val infoCatMap = Map("domain"->"domain_name", "ip"->"ip_address", "email"->"admin_email", "dns"->"dns", "name"->"registrant");
// look up the raw value of a given info field in one record
def peekInfo (data: Array[String], infoType: String) : String = {
  data(schemaIndexMap(infoCatMap(infoType)))
}
// build an Info vertex property, or null if the field is empty (callers check peekInfo first)
def makeInfo (data: Array[String], infoType: String) : Info = {
  val info = peekInfo(data, infoType)
  if (info == "")
    return null
  else
    return Info(info, infoType)
}
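To illustrate how these helpers behave, here is a hypothetical record in schema order (all values are invented):
val sampleRecord = Array("42", "example.com", "93.184.216.34", "10523", "1", "2015-01-01", "SomeISP", "", "ns1.example.com", "open", "John Doe", "SomeRegistrar")
peekInfo(sampleRecord, "ip")    // "93.184.216.34"
makeInfo(sampleRecord, "ip")    // Info("93.184.216.34", "ip")
makeInfo(sampleRecord, "email") // null, since the admin_email field is empty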
val infoRDD: RDD[(VertexId, VertexProperty)] = rawDataWithIndex.flatMap { line =>
  val data = line._1
  var infoArray = Array[Info]()
  if (peekInfo(data, "domain") != "")
    infoArray = infoArray :+ makeInfo(data, "domain")
  if (peekInfo(data, "ip") != "")
    infoArray = infoArray :+ makeInfo(data, "ip")
  if (peekInfo(data, "email") != "")
    infoArray = infoArray :+ makeInfo(data, "email")
  if (peekInfo(data, "dns") != "")
    infoArray = infoArray :+ makeInfo(data, "dns")
  if (peekInfo(data, "name") != "")
    infoArray = infoArray :+ makeInfo(data, "name")
  infoArray
}.distinct().zipWithIndex.map(line => (line._2+dataSize, line._1));
Now we have to build the edges. They are constructed with the syntax
Edge(sourceIdx, targetIdx, edgeProperty)
So we need the source and target indices. The source indices are easy to find: they are simply the indices of the records in domainRDD (that's why we started with rawDataWithIndex). The Map object infoIndexMap is used for quickly looking up the index of an Info vertex; note that Info vertices should be indexed after Domain vertices, which is why we applied zipWithIndex to the raw data instead of zipWithUniqueId, even though the latter is much faster.
val infoIndexMap = sc.broadcast(infoRDD.map { line =>
  ((line._2).asInstanceOf[Info].info, line._1)
}.collectAsMap());
A key subtlety in building infoIndexMap is that we have to cast a VertexProperty instance to the Info subclass. For a discussion on casting types in Scala, see this post. Also note that we wrapped infoIndexMap into a Spark broadcast variable.
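As a side note, a safer and more idiomatic alternative to asInstanceOf in Scala is pattern matching, which fails loudly on an unexpected subclass. A sketch (infoIndexMapAlt is just an illustrative name, not used elsewhere in the code):
val infoIndexMapAlt = sc.broadcast(infoRDD.map { line =>
  line._2 match {
    case i: Info => (i.info, line._1)                    // the expected case
    case other   => throw new IllegalArgumentException("expected an Info vertex, got " + other)
  }
}.collectAsMap());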
The links can now be easily built:
val linkRDD: RDD[Edge[String]] = rawDataWithIndex.flatMap { line =>
  val data = line._1
  var edgeArray = Array[Edge[String]]()
  if (peekInfo(data, "domain") != "")
    edgeArray = edgeArray :+ Edge(line._2, infoIndexMap.value(peekInfo(data, "domain")), "")
  if (peekInfo(data, "ip") != "")
    edgeArray = edgeArray :+ Edge(line._2, infoIndexMap.value(peekInfo(data, "ip")), "")
  if (peekInfo(data, "email") != "")
    edgeArray = edgeArray :+ Edge(line._2, infoIndexMap.value(peekInfo(data, "email")), "")
  if (peekInfo(data, "dns") != "")
    edgeArray = edgeArray :+ Edge(line._2, infoIndexMap.value(peekInfo(data, "dns")), "")
  if (peekInfo(data, "name") != "")
    edgeArray = edgeArray :+ Edge(line._2, infoIndexMap.value(peekInfo(data, "name")), "")
  edgeArray
}
With all these preparations, building a graph and extracting its connected components in GraphX
is a piece of cake.
val mmBG: Graph[VertexProperty, String] = Graph(domainRDD.union(infoRDD), linkRDD);
val ccMMBG = mmBG.connectedComponents();
The ccMMBG object is a new graph whose vertices are all of the type (VertexId, VertexId), in which the second VertexId is the smallest vertex id among all vertices in the same connected component as the vertex carrying the first VertexId. (For instance, with invented ids: if vertices 0, 3, and 7 form one component, ccMMBG.vertices contains the pairs (0,0), (3,0), and (7,0).) At the end of the code we print out some simple statistics for the connected components of the constructed bipartite graph.
println("Total number of edges in the graph: " + mmBG.edges.count);
println("Total number of vertices in the graph: " + mmBG.vertices.count);
val ccNumVertices =
  (ccMMBG.vertices.map(pair => (pair._2,1))
    .reduceByKey(_+_) // count the number of vertices contained in each connected component (indexed by the smallest vertex index in the connected component)
    .map(pair => pair._2)) // only keep the number of vertices counted
println("Number of Connected Components: " + ccNumVertices.count);
ListMap(ccNumVertices.countByValue().toSeq.sortBy(_._1):_*).foreach(line => println(line._2 + " connected component(s) with " + line._1 + " vertices"));
The last line in this print section involves sorting a Map object in Scala. For more discussion of this topic, check this link to an online version of "Scala Cookbook", which may be of great help in itself.
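For a standalone illustration of this sorting idiom (the toy map below is invented), ListMap preserves the order produced by sortBy:
import scala.collection.immutable.ListMap

val counts = Map(3 -> 10, 1 -> 7, 2 -> 4)             // toy data
val sorted = ListMap(counts.toSeq.sortBy(_._1): _*)   // ListMap(1 -> 7, 2 -> 4, 3 -> 10)
sorted.foreach { case (k, v) => println(v + " item(s) with key " + k) }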
Here is the complete Main.scala:
package sparkGraph

import org.apache.spark._
import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD
import scala.collection.immutable.ListMap
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.log4j.Level
import org.apache.log4j.Logger

object sparkGraph {
  def main (args: Array[String]) {
    Logger.getLogger("org").setLevel(Level.OFF);
    Logger.getLogger("akka").setLevel(Level.OFF);

    val conf = new SparkConf().setAppName("SparkGraph").setMaster("local[*]");
    val sc = new SparkContext(conf);

    val schemaArray = "dtbd_id,domain_name,ip_address,traffic_rank,class_ecomm,date_found,isp_name,admin_email,dns,enforce_status,registrant,registrar".split(",");
    val schemaIndexMap = Map(schemaArray.zip((0 to schemaArray.length-1)).toArray: _*);
    val numFields = schemaArray.length;

    val rawDataWithIndex = (sc.textFile("/home/trgao10/Work/Scala/SparkScalaGraph/data/websites_clean_small.csv")
      .map(_.split(",\t"))
      .filter(_.length == numFields) // TODO: FIX THIS AD-HOC DATA PROCESSING
      .zipWithIndex);
    val dataSize = rawDataWithIndex.count();

    class VertexProperty extends Serializable {}
    case class Domain(val dtbd_id: String, val class_ecomm: String, val date_found: String, val isp_name: String, val enforce_status: String, val registrar: String, val traffic_rank: String) extends VertexProperty;

    val domainRDD: RDD[(VertexId, VertexProperty)] = rawDataWithIndex.map { line =>
      val ID = line._2
      val DomainInfo = line._1
      (ID, Domain(DomainInfo(schemaIndexMap("dtbd_id")), DomainInfo(schemaIndexMap("class_ecomm")), DomainInfo(schemaIndexMap("date_found")), DomainInfo(schemaIndexMap("isp_name")), DomainInfo(schemaIndexMap("enforce_status")), DomainInfo(schemaIndexMap("registrar")), DomainInfo(schemaIndexMap("traffic_rank"))))
    };

    case class Info(val info: String, val category: String) extends VertexProperty;
    val infoCatMap = Map("domain"->"domain_name", "ip"->"ip_address", "email"->"admin_email", "dns"->"dns", "name"->"registrant");

    def peekInfo (data: Array[String], infoType: String) : String = {
      data(schemaIndexMap(infoCatMap(infoType)))
    }

    def makeInfo (data: Array[String], infoType: String) : Info = {
      val info = peekInfo(data, infoType)
      if (info == "")
        return null
      else
        return Info(info, infoType)
    }

    val infoRDD: RDD[(VertexId, VertexProperty)] = rawDataWithIndex.flatMap { line =>
      val data = line._1
      var infoArray = Array[Info]()
      if (peekInfo(data, "domain") != "")
        infoArray = infoArray :+ makeInfo(data, "domain")
      if (peekInfo(data, "ip") != "")
        infoArray = infoArray :+ makeInfo(data, "ip")
      if (peekInfo(data, "email") != "")
        infoArray = infoArray :+ makeInfo(data, "email")
      if (peekInfo(data, "dns") != "")
        infoArray = infoArray :+ makeInfo(data, "dns")
      if (peekInfo(data, "name") != "")
        infoArray = infoArray :+ makeInfo(data, "name")
      infoArray
    }.distinct().zipWithIndex.map(line => (line._2+dataSize, line._1));

    val infoIndexMap = sc.broadcast(infoRDD.map { line =>
      ((line._2).asInstanceOf[Info].info, line._1)
    }.collectAsMap());

    val linkRDD: RDD[Edge[String]] = rawDataWithIndex.flatMap { line =>
      val data = line._1
      var edgeArray = Array[Edge[String]]()
      if (peekInfo(data, "domain") != "")
        edgeArray = edgeArray :+ Edge(line._2, infoIndexMap.value(peekInfo(data, "domain")), "")
      if (peekInfo(data, "ip") != "")
        edgeArray = edgeArray :+ Edge(line._2, infoIndexMap.value(peekInfo(data, "ip")), "")
      if (peekInfo(data, "email") != "")
        edgeArray = edgeArray :+ Edge(line._2, infoIndexMap.value(peekInfo(data, "email")), "")
      if (peekInfo(data, "dns") != "")
        edgeArray = edgeArray :+ Edge(line._2, infoIndexMap.value(peekInfo(data, "dns")), "")
      if (peekInfo(data, "name") != "")
        edgeArray = edgeArray :+ Edge(line._2, infoIndexMap.value(peekInfo(data, "name")), "")
      edgeArray
    }

    val mmBG: Graph[VertexProperty, String] = Graph(domainRDD.union(infoRDD), linkRDD);
    val ccMMBG = mmBG.connectedComponents();

    println("Total number of edges in the graph: " + mmBG.edges.count);
    println("Total number of vertices in the graph: " + mmBG.vertices.count);

    val ccNumVertices =
      (ccMMBG.vertices.map(pair => (pair._2,1))
        .reduceByKey(_+_) // count the number of vertices contained in each connected component (indexed by the smallest vertex index in the connected component)
        .map(pair => pair._2)) // only maintain the number of vertices counted
    println("Number of Connected Components: " + ccNumVertices.count);
    // ccMMBG.vertices consists of (VertexId, VertexId) pairs, in which the second VertexId represents the smallest ID of the vertex in the same connected component
    ListMap(ccNumVertices.countByValue().toSeq.sortBy(_._1):_*).foreach(line => println(line._2 + " connected component(s) with " + line._1 + " vertices"));
  }
}
Finally, we are ready to compile and run the code. With a project folder and build.sbt
structured as in the section "Set up Emacs+Ensime", we can use sbt
in the following manner:
trgao10@Terranius ~/W/S/webGraphInfer> sbt
[info] Loading global plugins from /home/trgao10/.sbt/0.13/plugins
[info] Set current project to scala_graphx (in build file:/home/trgao10/Work/Scala/webGraphInfer/)
> compile
[info] Compiling 1 Scala source to /home/trgao10/Work/Scala/webGraphInfer/target/scala-2.10/classes...
[success] Total time: 15 s, completed Oct 7, 2015 11:12:23 PM
> run
We are able to process a bipartite graph with roughly 290,000 vertices and 800,000 edges in just 49 seconds. See the terminal output below. The largest connected component contains 266,475 vertices.
[info] Running sparkGraph.sparkGraph
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/10/07 23:13:15 INFO Remoting: Starting remoting
15/10/07 23:13:16 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.2.14:50443]
Total number of edges in the graph: 801189
Total number of vertices in the graph: 289326
Number of Connected Components: 1426
125 connected component(s) with 1 vertices
31 connected component(s) with 2 vertices
134 connected component(s) with 3 vertices
47 connected component(s) with 4 vertices
36 connected component(s) with 5 vertices
139 connected component(s) with 6 vertices
117 connected component(s) with 7 vertices
66 connected component(s) with 8 vertices
83 connected component(s) with 9 vertices
72 connected component(s) with 10 vertices
65 connected component(s) with 11 vertices
39 connected component(s) with 12 vertices
25 connected component(s) with 13 vertices
38 connected component(s) with 14 vertices
34 connected component(s) with 15 vertices
22 connected component(s) with 16 vertices
23 connected component(s) with 17 vertices
25 connected component(s) with 18 vertices
18 connected component(s) with 19 vertices
12 connected component(s) with 20 vertices
14 connected component(s) with 21 vertices
12 connected component(s) with 22 vertices
21 connected component(s) with 23 vertices
8 connected component(s) with 24 vertices
11 connected component(s) with 25 vertices
13 connected component(s) with 26 vertices
13 connected component(s) with 27 vertices
7 connected component(s) with 28 vertices
6 connected component(s) with 29 vertices
11 connected component(s) with 30 vertices
7 connected component(s) with 31 vertices
6 connected component(s) with 32 vertices
2 connected component(s) with 33 vertices
5 connected component(s) with 34 vertices
6 connected component(s) with 35 vertices
5 connected component(s) with 36 vertices
4 connected component(s) with 37 vertices
4 connected component(s) with 38 vertices
5 connected component(s) with 39 vertices
4 connected component(s) with 40 vertices
2 connected component(s) with 41 vertices
2 connected component(s) with 42 vertices
4 connected component(s) with 43 vertices
4 connected component(s) with 44 vertices
6 connected component(s) with 45 vertices
1 connected component(s) with 46 vertices
5 connected component(s) with 47 vertices
3 connected component(s) with 48 vertices
3 connected component(s) with 49 vertices
2 connected component(s) with 50 vertices
2 connected component(s) with 51 vertices
1 connected component(s) with 52 vertices
3 connected component(s) with 53 vertices
2 connected component(s) with 54 vertices
3 connected component(s) with 55 vertices
2 connected component(s) with 56 vertices
2 connected component(s) with 57 vertices
1 connected component(s) with 59 vertices
1 connected component(s) with 60 vertices
1 connected component(s) with 61 vertices
1 connected component(s) with 62 vertices
2 connected component(s) with 63 vertices
1 connected component(s) with 64 vertices
2 connected component(s) with 66 vertices
1 connected component(s) with 67 vertices
3 connected component(s) with 68 vertices
3 connected component(s) with 69 vertices
2 connected component(s) with 70 vertices
1 connected component(s) with 71 vertices
2 connected component(s) with 72 vertices
1 connected component(s) with 73 vertices
1 connected component(s) with 74 vertices
1 connected component(s) with 77 vertices
1 connected component(s) with 78 vertices
2 connected component(s) with 81 vertices
1 connected component(s) with 84 vertices
1 connected component(s) with 86 vertices
2 connected component(s) with 88 vertices
1 connected component(s) with 92 vertices
1 connected component(s) with 94 vertices
1 connected component(s) with 97 vertices
1 connected component(s) with 98 vertices
1 connected component(s) with 99 vertices
2 connected component(s) with 104 vertices
1 connected component(s) with 107 vertices
1 connected component(s) with 110 vertices
1 connected component(s) with 113 vertices
1 connected component(s) with 115 vertices
1 connected component(s) with 118 vertices
1 connected component(s) with 121 vertices
1 connected component(s) with 124 vertices
1 connected component(s) with 125 vertices
1 connected component(s) with 129 vertices
1 connected component(s) with 130 vertices
1 connected component(s) with 135 vertices
1 connected component(s) with 136 vertices
1 connected component(s) with 149 vertices
1 connected component(s) with 150 vertices
1 connected component(s) with 153 vertices
1 connected component(s) with 158 vertices
1 connected component(s) with 159 vertices
1 connected component(s) with 161 vertices
1 connected component(s) with 181 vertices
1 connected component(s) with 213 vertices
1 connected component(s) with 230 vertices
1 connected component(s) with 256 vertices
1 connected component(s) with 284 vertices
1 connected component(s) with 301 vertices
1 connected component(s) with 313 vertices
1 connected component(s) with 266475 vertices
[success] Total time: 49 s, completed Oct 7, 2015 11:14:00 PM
To learn more about sbt, check its official online tutorial.