object IPFIXFields
Collections of fields relevant to IPFIX data sources.
These are primarily the fields produced by YAF version 3 and later, although the same fields are produced by other tools.
These collections, and the individual fields within them, may be passed directly to spark.read.fields; every field in the collection will then be included in the output.
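A minimal sketch of this usage (assuming the DataFrameReader extensions from org.cert.netsa.mothra.datasources are in scope; the data path is hypothetical):

scala> import org.cert.netsa.mothra.datasources._
import org.cert.netsa.mothra.datasources._

scala> import org.cert.netsa.mothra.datasources.ipfix.IPFIXFields
import org.cert.netsa.mothra.datasources.ipfix.IPFIXFields

scala> // read IPFIX records with the default field collection (hypothetical path)
scala> val df = spark.read.fields(IPFIXFields.default).ipfix("hdfs:///data/ipfix")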
In a REPL, many of the provided symbols can be discovered via tab completion. In addition, most of the fields and field collections provide useful information when converted to strings. For example:
scala> import org.cert.netsa.mothra.datasources.ipfix.IPFIXFields
import org.cert.netsa.mothra.datasources.ipfix.IPFIXFields

scala> println(IPFIXFields.default)
// Most common fields for flow-based traffic analysis
FieldsSpec(
  // #default
  // Core flow fields, including time and the 5-tuple
  FieldsSpec(
    // #core
    // Timestamp of the first packet of this Flow
    "startTime" -> "func:startTime",
    // Timestamp of the final packet of this Flow
    "endTime" -> "func:endTime",
    // IPv4 or IPv6 source for incoming packets in this Flow
    "sourceIPAddress" -> "func:sourceIPAddress",
    // Any source port identifier from the transport header
    "sourcePort" -> "func:sourcePort",
    // IPv4 or IPv6 destination for incoming packets in this Flow
    "destinationIPAddress" -> "func:destinationIPAddress",
    // Any destination port identifier from the transport header
    "destinationPort" -> "func:destinationPort",
    // Protocol number from the IP packet header
    "protocolIdentifier"
  ),
  ...
)
This prints all of the fields in the "default" collection. Each field includes its definition as an IPFIX field expression and a short description. Some fields are grouped into larger collections, which are also given names and descriptions.
Each of the field names, like "startTime", may be used as an index to pull out a specific field from the collection:
scala> println(IPFIXFields.default("startTime"))
FieldsSpec(
  // Timestamp of the first packet of this Flow
  "startTime" -> "func:startTime"
)
Collection names such as "#core" may also be used to pull out sub-collections:
scala> println(IPFIXFields.default("#tcpflags"))
// Fields for TCP protocol flag information
FieldsSpec(
  // #tcpflags
  // TCP flags of first incoming packet of a TCP Flow
  "initialTCPFlags",
  // TCP flags of first outgoing packet of a TCP Flow
  "reverseInitialTCPFlags",
  // Union of TCP flags of all incoming packets after the first
  "unionTCPFlags",
  // Union of TCP flags of all outgoing packets after the first
  "reverseUnionTCPFlags"
)
Value Members
- val core: FieldsSpec
Core IPFIX flow fields, including time and the 5-tuple: startTime, endTime, sourceIPAddress, sourcePort, destinationIPAddress, destinationPort, and protocolIdentifier.
- val counts: FieldsSpec
Fields for basic volume statistics: packetCount, reversePacketCount, octetCount, and reverseOctetCount.
- val counts_uniflow: FieldsSpec
Fields for basic volume statistics for uniflow: packetCount and octetCount.
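Like the larger collections, this one can be inspected by printing it. A sketch of the expected shape (the comment text shown here is illustrative, not verbatim output):

scala> println(IPFIXFields.counts_uniflow)
FieldsSpec(
  // (illustrative) packets observed in this Flow
  "packetCount",
  // (illustrative) octets observed in this Flow
  "octetCount"
)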
- val default: FieldsSpec
Default flow fields, including core fields, label fields, counts, and TCP flags, all as described above.
- val default_uniflow: FieldsSpec
Default uniflow fields, including core fields and uniflow versions of label fields, counts, and TCP flags, all as described above.
- val everything: FieldsSpec
Accessible collection of all fields useful for working with YAF3 in particular. You should not use all of these at once; instead, select the individual fields (everything("fieldName")) or collections (everything("#collectionName")) that you wish to use. Multiple items may be selected at once, as in everything("field1", "#coll1", "field2", ...). You can use the facilities described at org.cert.netsa.mothra.datasources to add your own fields. Finally, note that if the same field occurs more than once, only the first definition is kept.
- val label: FieldsSpec
Fields used for flow labeling and partitioning: observationDomainId, vlanId, reverseVlanId, and silkAppLabel.
- val label_uniflow: FieldsSpec
Fields used for flow labeling and partitioning for uniflow: observationDomainId, vlanId, and silkAppLabel.
- val legacy: FieldsSpec
Older default definitions, which are no longer recommended. These names are closer to what users of SiLK might expect, but this hides some of the differences in the IPFIX format and in the bidirectional flow data commonly used with IPFIX.
- val tcpflags: FieldsSpec
Fields for TCP flag information: initialTCPFlags, reverseInitialTCPFlags, unionTCPFlags, and reverseUnionTCPFlags.
- val tcpflags_uniflow: FieldsSpec
Fields for TCP flag information for uniflow: initialTCPFlags and unionTCPFlags.
- object cert_tool extends Multiple
Miscellaneous internal templates for CERT NetSA tools.
Note: These are *not* included in everything.
- object dpi extends Multiple
Flow fields for deep packet inspection data, broken down by protocol.
- object plugins extends Multiple
Flow fields provided by YAF plugins, broken down by plugin.
This is documentation for Mothra, a collection of Scala and Spark library functions for working with Internet-related data. Some modules contain APIs of general use to Scala programmers. Some modules make those tools more useful on Spark data-processing systems.
Please see the documentation for the individual packages for more details on their use.
Scala Packages
These packages are useful in Scala code without involving Spark:
org.cert.netsa.data
This package, which is collected as the netsa-data library, provides types for working with various kinds of information:
- org.cert.netsa.data.net - types for working with network data
- org.cert.netsa.data.time - types for working with time data
- org.cert.netsa.data.unsigned - types for working with unsigned integral values
org.cert.netsa.io.ipfix
The netsa-io-ipfix library provides tools for reading and writing IETF IPFIX data from various connections and files.
org.cert.netsa.io.silk
To read and write CERT NetSA SiLK file formats and configuration files, use the netsa-io-silk library.
org.cert.netsa.util
The "junk drawer" of netsa-util so far provides only two features: first, a method for equipping Scala scala.collection.Iterators with exception handling; and second, a way to query the versions of NetSA libraries present in a JVM at runtime.
Spark Packages
These packages require the use of Apache Spark:
org.cert.netsa.mothra.datasources
Spark datasources for CERT file types. This package contains utility features which add methods to Apache Spark DataFrameReader objects, allowing IPFIX and SiLK flows to be opened using simple spark.read... calls. The mothra-datasources library contains both IPFIX and SiLK functionality, while mothra-datasources-ipfix and mothra-datasources-silk contain only what's needed for the named datasource.
org.cert.netsa.mothra.analysis
A grab-bag of analysis helper functions and example analyses.
org.cert.netsa.mothra.functions
This single Scala object provides Spark SQL functions for working with network data. It is the entirety of the mothra-functions library.