Transcript
This transcript was autogenerated. To make changes, submit a PR.
Hello everyone, and welcome to this session on OpenTelemetry, and more specifically the OpenTelemetry Collector. So, to give a brief outline, we'll start by giving an overview of the OpenTelemetry project and its functionality.
We will talk about the collector and its architecture,
and from there we will go to the main section of this
presentation, how to configure the collector.
We'll have some live demos that show how to configure
and deploy the collector, and the goal is
to help you be able to configure the collector for
whatever your use case is. But before we get too far,
let's get started with some introductions. A little bit more about me: my name is Curtis Robert. I am a software engineer at Cisco. I've been working in the observability space for about two years now, and I'm currently an approver in the contrib distribution of the OpenTelemetry Collector. Next up, my co-presenter Steve. Thanks, Curtis.
My name is Steve Flanders and I'm an engineering leader at Cisco.
I've been involved with the OpenCensus and OpenTelemetry projects since the very beginning and have more than a decade of experience in the monitoring and observability space, including leading log initiatives at VMware, distributed tracing at Omnition, which is now the Splunk APM product, and now metrics as part of Splunk Infrastructure Monitoring. I'm currently writing a book about OpenTelemetry, and I'm excited for it to launch later this year.
All right, so let's talk a little bit about what OpenTelemetry is first, before jumping into our topic. So OpenTelemetry
is an observability framework and it's a toolkit. But at the end
of the day, it really provides three building blocks. It provides
a specification which is basically an open standard on which everything else
is built. And this is what really allows and empowers OpenTelemetry to be an open standard. On top of it, you have
instrumentation libraries, which is a way across different
languages to instrument an application, generating traces,
metrics, logs, and additional signals, as they're called in OpenTelemetry. And then there's a data collection component known as the OpenTelemetry Collector,
though there are other collector components as well. The collector allows
you to ingest, process and export telemetry data,
and we'll be talking more about that soon.
Now, why does OpenTelemetry matter at the end of the day? There are several factors at play here. One is that it is a single open standard that anyone can adopt, whether it is an end user, another open source project, or even a vendor. And it's vendor agnostic, which means it's not going to lock you into one particular observability platform; at the end of the day, it gives you flexibility and choice.
Second, OpenTelemetry really empowers you to have data portability and control. If you want to send your data to multiple different observability backends, or if you want to change where the observability data is going, you have the flexibility of doing that with OpenTelemetry. In addition, you control your data. So if you want to, for example, perform CRUD metadata operations, you can do that with the OpenTelemetry components that are available today. Now, OpenTelemetry
is the second most active project in the CNCF, behind only
Kubernetes, which I think really speaks to the fact that this is a problem
that needs to be solved, and it's adopted by a wide range of end users who also contribute to the project, and it's supported by every major vendor today. In addition, there are a variety of open source projects that also have OpenTelemetry integration,
including Kubernetes, Istio, Prometheus and Jaeger.
Now let me tell you a little bit more about the OpenTelemetry Collector,
which is kind of the primary focus of this talk. The collector,
as kind of mentioned, is a single binary that can be
used to ingest, process and export telemetry
data. It actually supports all signal data. So you can
deploy a single agent or gateway instance in order to
collect traces, metrics and logs. And it's very
flexible through its pipeline configuration, which we'll be demonstrating here
shortly. Now, if you think about a reference architecture,
the two most common ways in which the collector would be deployed is
as an agent that can actually be with the application itself as
a sidecar or, if you're in Kubernetes, as a DaemonSet.
So you can do like one per host, even if there are multiple applications on
that host. That's very common because it really offloads responsibilities
from the application itself, the instrumentation, allowing the agent to
kind of perform operations like buffer and retry logic.
Or it could be deployed as its own standalone instance,
typically referred to as a gateway. This is usually clustered,
so there's usually multiple of these instances, and this can act
as kind of a data aggregation or filtering layer. For example,
if you wanted to do tail based sampling, you would do it with the
gateway instance. Or if you wanted to collect from say
the Kubernetes API, where you only want to do that once again, you could
do that with a single gateway instance. So it's possible to
have your OTel instrumentation send to a locally running collector
agent. It could have that collector agent sending to a gateway cluster,
and you can have that gateway cluster sending to one or more backends.
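To make that layering a little more concrete, here is a minimal sketch of what the agent-tier configuration could look like, assuming an OTLP receiver on the agent; the gateway address otel-gateway.example.com:4317 is purely a placeholder for your own environment.

```yaml
# Agent-tier collector: receive OTLP from local instrumentation and
# forward it to a gateway cluster for further processing.
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  otlp:
    # Placeholder address for the gateway cluster in your environment.
    endpoint: otel-gateway.example.com:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```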
Now of course the collector is entirely optional. You don't have to
deploy it in either of these ways. In fact, you can have instrumentation send directly
to your observability backend if you want. Again, the whole point
here that OpenTelemetry is providing is flexibility and choice every step of the way. All right, now the collector is a Go binary, or rather it's written in Go, and it has different packaging available. So you can, for example, deploy this very easily on Linux, Windows, or macOS. In addition to this, it works very well in containerized environments, and Docker containers are provided. And there's even packaging specific to Kubernetes for the more cloud native workloads as well.
Now if you want to create your own again, you could do that, right?
The building blocks are in place, so if you need other packaging options,
they are available. Perhaps more important though is
that the collector has this notion of distributions. A distribution
is a specific set of components that are bundled
with the collector itself. If you think of the OpenTelemetry repositories, which are hosted on GitHub, the collector has two main repositories. The first one is core. It is located at opentelemetry-collector, and that has all the core components that are required for a pure OpenTelemetry environment. For example, OTLP, which is the format all data is transformed into in OpenTelemetry, has a receiver and exporter available in the core distribution.
Now the contrib repository, which is available at opentelemetry-collector-contrib, actually has a wide range
of receivers and processors and exporters. For example,
this is where commercial vendors or even open source vendors would
have their receiver or exporter components available.
In addition, it contains functionality that you may or may not need.
For example, if you want to do CRUD metadata operations, that would be a processor, and it exists in the contrib repository.
So it's very likely that you need some amount of components from core and contrib.
At the end of the day, another packaging model that's available, in terms of distributions at least, is Kubernetes. And in the case of Kubernetes, it's actually packaged with Helm. So there's a Helm chart available, and there's even an operator. And that packaging ensures that you have the components
required to collect data from everything that Kubernetes has
to provide. So there are actually specific receivers and processors that are
enabled for you there. Now, the great part here is that there's actually
a utility available that allows you to create custom distributions
of the collector if you need it. So if you want to pick and choose
which one of these components you're pulling in, you can do that. That is great
for mature production environments where you're looking to really reduce the
surface area of things that could be configured. And it gives you a
lot more control of the collector instance itself.
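As a rough illustration, that utility, the OpenTelemetry Collector Builder, is driven by a small manifest file. The sketch below shows the general shape; the distribution name, output path, and module versions are illustrative and would be pinned to whichever collector release you build against.

```yaml
# Sketch of a builder (ocb) manifest for a custom distribution.
dist:
  name: otelcol-custom
  output_path: ./otelcol-custom

receivers:
  # Versions below are illustrative; match them to your collector release.
  - gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.98.0
  - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/receiver/hostmetricsreceiver v0.98.0

processors:
  - gomod: go.opentelemetry.io/collector/processor/batchprocessor v0.98.0

exporters:
  - gomod: go.opentelemetry.io/collector/exporter/debugexporter v0.98.0
```

Only the components listed in the manifest end up in the resulting binary, which is what shrinks the configuration surface area mentioned above.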
Now let's jump right into the configuration because that's the primary
part of what we want to talk about today. I want to show it first
and then I'll be turning it over to Curtis to give us a nice demo
of how this all ends up playing out. So to do that,
I have to talk about the components a little bit more in depth.
So receivers are how you get data into the collector.
These can be push or pull based. That means, for example,
if you're familiar with tracing, trace data is often pushed
from the application to a collector. That would be a push
based receiver. Now, in the case of metrics, if you're familiar with Prometheus,
it's very common that some endpoint is actually being scraped. The application
metrics are being scraped themselves. That would be an example of pull based
data collection. Both those are supported. There's a wide range of receivers.
Processors are what you do with the data after you've collected it.
So if we go back to like tail-based sampling or CRUD metadata operations, those would both be processors. And you have
a lot of control there on how you want the data to be processed.
Then you have exporters, which are how you get data out of the collector.
Again, this can be push or pull based, depending on the configuration,
and then extensions. Extensions are usually things that don't touch
the data themselves. Two very good examples. One would be like health check
information for the collector instance itself. That's available as an extension.
Or let's say you want to do something like authentication or
service discovery for your receivers or even your exporters.
Those things are usually available as extensions as well.
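For example, the health check extension is defined and then enabled at the service level, separately from the pipelines. A minimal sketch, assuming the extension's usual default port of 13133 and a simple OTLP-to-debug pipeline just to make the file complete:

```yaml
extensions:
  health_check:
    endpoint: 0.0.0.0:13133

receivers:
  otlp:
    protocols:
      grpc:

exporters:
  debug:

service:
  extensions: [health_check]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```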
The newest component available is this notion of a connector. A connector
is unique in that it's both a receiver and an exporter.
It actually allows you to reprocess information, either the same or different, after it's gone through a pipeline in the collector itself. It's a little bit more of an advanced use case; you typically wouldn't be starting with a connector component, but it is available for use
as well to ensure that you have everything that you need to fully process data
in your environment. Now the collector is configured through YAML,
very, very common in cloud native environments, for example in kubernetes.
And it's really a two step process. So all of
those components I talked about, they have to be defined and
configured in this YAML. So on the top half of the example
that you see on the screen, you will see things like receivers, processors and
exporters being defined and you can define more than one of them.
So for example, if we look at receivers for a second, you'll see that there's
a host receiver that's being configured and that allows you to collect metrics
and you can see that it has a custom configuration where certain scrapers are
being defined and then you have an OTLP receiver
that's defined. And because there's no additional configuration there,
what that actually means is it's taking the default configuration
that's available for the OTLP receiver today. Now just
defining these and configuring them doesn't actually enable it.
So the next step here is actually to define what's called a service pipeline
in the collector and service pipelines are signal specific.
So you'll see examples here of both metrics and traces.
And that's where you actually define like which receivers, processors and
exporters or even connectors are at play here.
So if we look at the metrics example for a second, it's going
to take that host receiver, it's going to do some batch processing
and then it's going to export it via Prometheus. You can see a similar
example for traces. Now what's really interesting here is that
there's flexibility in that you can receive in one format.
Let's say we're looking at traces here: we're receiving in OTLP format, but we're actually exporting in Zipkin format.
That's what allows the collector to actually be vendor agnostic.
It can receive data in one format and export it in a different
format. And in addition you can have multiple processors. So if
you want to perform multiple different pieces of processing before this data is exported,
that's totally possible as well. The order of processors
actually matters. So it's actually done in the order in which it's defined
in the list. And that's important because maybe you want to sample
before you filter or do CRUD metadata operations, or maybe you want to actually do CRUD metadata operations before you sample.
So depending on your business requirements there, there's a lot of flexibility and
choice every step of the way. And worth noting: it's actually the host metrics receiver, so that's just a typo on the slide here.
But yeah, these receivers are all defined within the GitHub repositories.
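Putting the pieces just described together, the slide's configuration looks roughly like the following. This is a reconstruction rather than the exact slide contents: the receiver is spelled out as hostmetrics, the particular scrapers and the Zipkin endpoint are illustrative, and the OTLP protocols are written out explicitly where the slide simply relied on defaults.

```yaml
receivers:
  hostmetrics:
    scrapers:
      cpu:
      memory:
  otlp:
    # The slide left this receiver unconfigured, meaning defaults apply;
    # the protocols are spelled out here for completeness.
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889
  zipkin:
    # Illustrative endpoint; point this at your own Zipkin instance.
    endpoint: http://zipkin.example.com:9411/api/v2/spans

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      processors: [batch]
      exporters: [prometheus]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [zipkin]
```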
So let's go ahead and look at those GitHub repositories. Where am I
going to find out what all these configuration options are and how I can define
them? We're going to go to the GitHub repo. That's why I talked to you
a little bit about those distributions. Here we are in the contrib repository
and we're actually looking in the receiver folder and you'll see that there
are actually multiple receivers defined. So if you're not
using the contrib distribution, if you're using core, you would have
to use a different repository; you just drop the -contrib at the end. And then if you didn't want to do receivers,
you want to do processors or exporters or connectors, you would change
to that particular folder and you would see everything that's available for
that component set. Now let's go ahead and open up
one of those. So here I am going back to the core repository. So we're
in opentelemetry-collector, we're underneath receivers.
We're looking at the OTLP receiver and you'll see it has some great
getting started information. It gives you an example of that configuration file.
At the end of the day it shows you what options are defined
and configurable as well as what their default options are.
And there's even advanced configuration options with links to that as well.
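The getting-started snippet in that README looks something along these lines; this is a sketch rather than a copy, using the standard OTLP ports of 4317 for gRPC and 4318 for HTTP.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
```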
So again, kind of rich getting started information. All of it's available in the GitHub
repo. It's actually not on the OpenTelemetry documentation site, opentelemetry.io, today. That will likely change in
the future. Now there's a few other ways you can actually configure
the collector as well. For example, you can use command line arguments, so if you want to define configuration as you're starting the OpenTelemetry Collector binary, you can do that. You can do more advanced things: within this YAML configuration, you can actually define environment variables that will get expanded for you automatically, providing a lot more of a dynamic configuration if you need it.
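For instance, recent collector versions can expand an environment variable inline in the YAML; a small sketch, where OTLP_ENDPOINT is just an example variable name:

```yaml
exporters:
  otlp:
    # ${env:OTLP_ENDPOINT} is read from the environment when the collector
    # starts; the variable name here is illustrative.
    endpoint: ${env:OTLP_ENDPOINT}
```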
And then there's a newer part of the OpenTelemetry project called the OpenTelemetry Transformation Language, or OTTL.
It's actually a DSL, or domain-specific language, that OpenTelemetry is creating and adopting, which will provide a much easier and much more standard way to define configuration going forward across the project. It's experimental, or in development mode, today, but it's actively being developed and getting richer and richer by the day. Okay, so instead of
me kind of showing you through two slides, I think it's way more
powerful to actually see a demo of this live. So I'm going to turn it
back over to Curtis and he's going to show you exactly how you can configure
and use the collector.
All right, thank you, Steve. So now that
we have a basic and general understanding of
the OpenTelemetry Collector, how it works, and its components,
let's jump into a demo. So for the sake
of simplicity, I'm going to be focusing on just
running the collector directly on my local system.
So as Steve mentioned earlier, we come to the releases page of our collector distribution and we find the binary for our own system. I'm running on Mac with an AMD64 processor architecture, so I just
downloaded this file to my local system and I can run
it from there. So I'll pull up the command line and I
will get started with a basic configuration.
So to run the collector locally, you simply
specify a config file: you pass --config and point it to your configuration file.
In our demo, we'll start with a
very basic configuration and we'll just build from there.
So as Steve mentioned earlier, the collector
is made up of different components. We have the primary
types are receivers, processors, and exporters. So in
our basic ground up demo, we'll just
start with the receiver and an exporter. And Steve
mentioned this already, but the host metrics receiver
is a good place to start. It just scrapes your local
system for data and sends
it through your metrics pipeline.
So in the reading here, we have information about how to
configure the host metrics receiver. One thing to
know is the collection interval, how frequently you want to scrape
your local system for data, and then also take
note the scrapers. So since I'm on Mac,
some of them are not compatible; CPU and disk don't work on Mac. So for our demo we'll
just use memory.
So we'll come back to our configuration file and we'll specify receivers: host metrics, with a collection interval, which I will say is 10 seconds.
And then for scrapers, as I said, we'll just use memory
for now. As far as exporters
go, I'm going to use the debug exporter.
It's always a good place to start. Just to make sure that data
is coming through as expected.
So we'll define a debug exporter.
And for verbosity, it's good to just say detailed to start out so that you get all the output that you can. The debug exporter just
sends all the data that's coming through your pipeline to your
standard error logger. So we
have a receiver, we have an exporter. The next step, we have to
define a metrics pipeline that the
metrics will actually flow through. So it goes
under service pipelines
and then metrics. You can have
multiple metrics pipelines. You can have logs,
traces. For this demo,
we'll just be doing metrics. So we want
a receiver here, host metrics.
As we define exporters,
debug. Okay, you can define
as many components as you want, receivers, processors,
exporters, connectors, but if they're not in a pipeline, they will not be used. So make sure to include any
component that you want to use in the relevant pipeline.
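At this point, the configuration file from the demo looks roughly like this; it's a sketch of what was typed rather than a verbatim copy.

```yaml
receivers:
  hostmetrics:
    collection_interval: 10s
    scrapers:
      memory:

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [debug]
```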
So now that we have a basic collector configuration
set up, we can come over to our command line and
we can run the collector.
Okay, it started up successfully. I'll just stop
the collector there. We can see the output,
and we come over here and we see that we're getting our memory
metric successfully. So the system.memory.usage metric shows the bytes of memory that
are in use. And then we have different data points down here.
So we have a data point for how many bytes
are being used. We have a data point for how many bytes
are free, and we have a data point for how many bytes
of memory are inactive.
So next in the demo, we can add a processor.
And for my demo,
the resource detection processor is a
good place to start.
This will be able to add resource attributes to
the metrics flowing through your pipeline. And in
my case, I want to use the system detector,
which will add the host name of my computer
and my OS type. So we'll come back over
to the configuration file. We will specify a
resource detection processor,
and we will use the system
detector. So what this
allows us to do is to filter metrics
based on where they're coming from.
So in whatever backend you're using, you can see
all the metrics from a specific device or
environment, and you can see if something's
going wrong or if something's abnormal. So we want to
make sure that we add our resource detection processor to the pipeline.
We'll save it. We will run the collector
again. Oh,
I messed up.
Invalid key: detector. Okay, it needs to be plural.
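For reference, the corrected processor section and pipeline look something like the following sketch; note the plural detectors key, which is what the validation error above was complaining about.

```yaml
processors:
  resourcedetection:
    detectors: [system]

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      processors: [resourcedetection]
      exporters: [debug]
```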
All right, so now we see our metrics are still coming through.
But now the resource detection processor
has detected my local os and it's detected
my machine's hostname. So now any metrics
coming through the processor and its pipeline will
have the system information attached
to these metrics. From here,
I think the next step is to send our data to
somewhere more useful. So I will use
Prometheus for this. So we can look
at the Prometheus exporter. It allows us to
expose metrics at a specific endpoint that can then be scraped
by our Prometheus environment.
So I will define a Prometheus exporter.
We have to set an endpoint and
I will use localhost:8889, which is a common port to use for the Prometheus exporter, and I have set up my Prometheus environment to scrape that endpoint.
And then the other configuration option that is helpful
here is resource to telemetry conversion.
This says to add resource attributes as
labels to Prometheus metrics. This will make
sure that we still have our host information
attached to Prometheus metrics.
So: resource_to_telemetry_conversion, enabled: true. And then we have to add Prometheus
to our pipeline and
we can run the collector again and we'll check
to see if Prometheus is getting our metrics properly.
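The exporter section and updated pipeline from the demo look roughly like this sketch:

```yaml
exporters:
  prometheus:
    endpoint: localhost:8889
    # Copy resource attributes (hostname, OS type) onto the metrics as labels.
    resource_to_telemetry_conversion:
      enabled: true

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      processors: [resourcedetection]
      exporters: [debug, prometheus]
```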
All right, let's start it up properly here and
we'll go over to Prometheus, and I'll start here to see all of the
metrics that Prometheus is
scraping and we'll see if the metric that we're looking for
came through. So we see here
system_memory_usage_bytes. So we can notice that this metric is named differently than what we saw in the debug exporter. This is simply because Prometheus's naming conventions are different from OpenTelemetry's. So the metric name gets converted to Prometheus format, and
we can come over to the Prometheus
search and we can look up system_memory_usage_bytes, and we can see our data that is being
scraped by Prometheus. A couple things to note.
We can see that the hostname is coming through properly, so in Prometheus we could filter based on hostname, and also the state is coming through properly. So Prometheus shows that we
have memory coming through
in different states and we can view the graph
and see how that changes over time. So our
free bytes are pretty low, we have
inactive bytes and then we have used bytes.
So from here we can add
another processor and we can filter out data
points that we may not be interested in.
So this will help us reduce
our resource usage and
that way we don't have to ingest metrics that we
don't care about. So for this demo, I think
I will use the filter processor to
filter out points that are not interesting.
And in this case I'm just going to filter out inactive
memory bytes. So the filter processor
has the configuration options and we're just going to find
an example that looks close to what we're
looking for. So we have some ones that are close here.
We just want to filter out data points based on metric name and
attributes. So we'll define a new processor: filter, with error_mode: ignore, since this is just a demo. And we're going to be filtering metrics, and we're going to be filtering by data point. So here we're going to define the metric name. And note that this will be in the OTel naming scheme, not Prometheus. The metrics are converted to Prometheus format in the exporter, so when they're coming through the processor, they're still in the OTel naming scheme. So the metric name is system.memory.usage,
and then the state that we're
trying to filter out is inactive.
So we'll filter by state inactive.
So what this does is any data point that is in
the system memory usage metric and has state
inactive will be dropped by the filter processor.
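The filter processor configuration from the demo looks roughly like this sketch; the condition uses OTTL syntax, matching on the metric name and the state attribute. The processor then still has to be added to the metrics pipeline alongside resourcedetection.

```yaml
processors:
  filter:
    error_mode: ignore
    metrics:
      datapoint:
        # Drop data points of system.memory.usage whose state is "inactive".
        - metric.name == "system.memory.usage" and attributes["state"] == "inactive"
```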
So we will restart the collector
since we're changing the configuration.
Oh, I messed up the configuration again. Oh, "inactive" needs to be a string.
All right.
Okay, the collector has started successfully again. So let's go
over to Prometheus to see if we're properly
dropping inactive data points.
So we will search again. All right,
so we see here that there's a short break in
metrics where the collector was down. We see
that before when the collector was running, we had used,
we had inactive and we had free memory
bytes. But now that the collector has restarted and we're filtering
out the inactive data points,
we're not seeing those data points anymore. So the filter processor
is working as expected.
So now in our configuration, we've successfully
added a receiver, multiple processors, and an exporter.
We're sending our data to Prometheus. We're filtering out data
points, maybe not useful for us, at least in this
demo, and we are able to view our metrics
in Prometheus. We wanted to share a
more involved example of the power and capabilities
of the collector, so we'll be using the official OpenTelemetry
demo. This will allow us to deploy the collector in a
Kubernetes environment and to configure it and
observe a website built from a microservices
based architecture. And that's what the demo is.
So let's view the architecture.
So the demo is a website
that is a store. So you can browse different
products that are for sale, you can add them to your
cart, you can check out,
add your personal information like your address, credit card,
and you can order. So as we
see here, we have different microservices running that handle different parts
of the website. And as the legend
shows, these microservices are written in quite a
few different languages. We have .NET, C++, Go, Java, JavaScript, Python, and many others.
So for each of these microservices, we have the language's SDK instrumenting the microservice, either manually or automatically.
So these microservices are instrumented to send their
telemetry to the collector.
So if we scroll down a little bit, we can see that the data is
flowing from the demo to the collector.
And there's a link here to the collector configuration.
It's a bit hard to follow the flow of data
in the collector if we're just looking at the YAML file.
So for the sake of the demo, we can use OTelBin
to visualize the flow of data.
So in our pipelines we can see that we
have traces, metrics and logs.
In our first demo we only had metrics,
so this will be helpful in showing us how other telemetry works as well.
For traces, we see that they are coming
through the OTLP receiver, which is
defined here in the configuration file.
We're able to receive traces over gRPC or HTTP,
and then these traces when they're coming from these different microservices
will go through the batch processor and then be sent on to
these exporters. We have the OTLP
exporter, the debug and the span metrics
connector. So the connector
introduces a special use case and
we can look at the spanmetrics
connect or reaDMe to see what it does. But first,
general overview of connectors. Connectors can be used
to connect different pipelines telemetry.
So here we see that the spanmetrics
connector is an exporter in the traces pipeline, but then it's
listed as a receiver in the metrics pipeline.
So the spanmetrics connector is able to
generate metrics from your traces and those metrics will
go through the metrics pipeline.
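In the demo's configuration, that wiring looks roughly like the sketch below: the spanmetrics connector sits in the exporters list of the traces pipeline and in the receivers list of the metrics pipeline. The other component names are the ones just described from the demo; the receiver, processor, and exporter definitions and their endpoints are omitted here.

```yaml
connectors:
  spanmetrics:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp, debug, spanmetrics]
    metrics:
      # Component definitions omitted; see the demo's full configuration.
      receivers: [httpcheck, redis, otlp, spanmetrics]
      processors: [batch]
      exporters: [otlphttp, debug]
```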
For more information, we can read the span metrics connector README.
So what it's doing is it's generating metrics
based on the spans that flow through the
collector. It's able to generate request count
metrics and metrics for error counts and the duration
of spans. So this allows you to search
your data in different ways.
You can view your traces and then you can view the same
data just formatted differently and compiled
in metric format. So coming
back here to the configuration, we have the metrics pipeline.
We have the HTTP check receiver,
we have the redis receiver and the OTLP receiver as well.
And all of these receivers are going through the batch processor,
which then sends data to the OTLP HTTP
exporter and the debug exporter.
For logs, we have the OTLP receiver, the batch processor, and the OpenSearch and debug exporters. A couple things to note here.
You're able to use the same component declaration
in multiple pipelines. It's not a problem.
Another thing, the order of
receivers and exporters in your pipeline does not matter,
but the order of your processors does matter.
Your data is flowing through one processor after the next in order.
So the order is very important.
All right, coming back to our architecture: from the exporters alone, it's hard to know where the data will end up going as far as your backend is concerned. So the architecture shows us what else
the demo is spinning up. And the demo is
spinning up a Prometheus
instance, Grafana and Jaeger as the backends.
So on the left we have metrics, and metrics will be
going to Prometheus and Grafana,
and our traces will be going to Jaeger and Grafana.
I've deployed it in Kubernetes; there's another help page that shows how to do this. By default, there's a load generator running in your environment to mimic what a real load would be for your website. For the case of this demo, I have turned it off so that we can see our own usage information.
So let's go to the website and take a look around.
Okay, go shopping. All right, so here
we have the different products available that we can buy.
Comet book looks nice. Pick up a few
of those and let's
see what other items are recommended.
Okay. The optical tube assembly
looks expensive, so let's do that and we'll
pick up a few.
All right, now that our items are in the cart and
we're ready to check out, we'll just confirm
our information, make sure
it all looks good, and we can place our
order. So here we see our order is
complete and everything looks good.
So let's go see the telemetry that the microservices
have generated
from our instrumentation. So we can go to Jaeger.
One view that is helpful. I'm sure other backends have this
as well, but it's helpful to see the flow of
requests through our architecture. So the traces coming
into jaeger are able to generate this view that
show us where a request goes when we're interacting with
the website. So when we go to checkout,
especially, we can see all the different services that the checkout
service depends on. If we go to search,
we can view traces for specific services.
So I'm going to take a look at the front end service and
find traces that have gone through the front end service.
We're looking for the POST operation because this will show us the
different durations for checking out
and buying items from our store. So we can see some services
take quite a long time and some
services are very fast. And this
view helps us see another large value proposition of OpenTelemetry and the collector as well: these microservices are written in very different technologies, different languages, and so on. But with our instrumentation and sending traces through the collector, they can all be viewed in the same format and in the same chart, looking the same. This helps
a lot with being able to reduce our mean time
to resolution for outages and to be able to search information
in a uniform way.
So from here we can go take a look at our Grafana
instance and we see a few dashboards that are
there for us automatically.
So we can take a look at the demo dashboard. Usually we can see different data coming in from the span metrics, and there's another dashboard for that, so we'll take a look there later. But here we can see logs coming from the ad service. We can see that most are just informational
severity, but we are getting a couple warnings.
So if we're running into any issues, we can see logs
that may point us in the right direction. We can see the
number of each log message and the number of logs
at each severity level. We also have information
around the microservices and how
they're working performance wise.
All right, we're back. We'll take
a look at other dashboards. We can go to the collector dashboard
here. We can see the rate of data coming
through the collector itself. So we can see how many spans
are coming through, how many metrics and logs are coming through.
We can see the general performance of the collector,
we can see how much memory it's using and so on.
And as mentioned earlier, we can come to the spanmetrics demo
dashboard and this will show us the metrics that the spanmetrics
connector is generating. So latency here
shows how much latency there is,
how long the spans take
for each service. On the
top right, we have how many requests per second
each service is getting.
You can see some are used much more than others and then
we have error rates so we can see if any microservice
is hitting a lot of errors.
So that was a short overview of the open
telemetry demo. We saw how to deploy in a
Kubernetes environment with the collector. We saw the
collector's configuration for receiving telemetry of
traces, metrics and logs, and we were able to see how
useful all this data can be in different back ends.
We saw how we can detect outages quicker with
traces and metrics, and we saw different ways
to find the root cause of issues.
We hope this gives more insight into the full potential
and capabilities of OpenTelemetry and
the collector as well. And we hope this was helpful in
learning how to configure and use the collector.
Thank you. That was a very cool demo. So as
you can see, it's really easy to get started with the OpenTelemetry Collector,
and it's extremely flexible and able to handle pretty much any use case
that you can think of today. When it comes to data collection, there's a wide
range of components out there. We didn't have time to cover all of them,
but the basic process you follow is the same across the board:
Go find the GitHub repository that it's being hosted in. Go find the component itself.
And there's rich README information and examples there, even test data that
you can use in order to kind of try out these components. And the components
cover a wide range of environments or use cases today.
So it's very, very feature rich and new ones are being added all the time.
So if you're not seeing something that you need, go ahead and file a GitHub
issue or feel free to come contribute to the project.
We always welcome people to contribute here and kind of expand
upon what is available. The goal is to provide as wide
of coverage as possible. Now, the collector is very, very powerful,
right? Single binary that can be deployed in a variety
of different form factors that allows you to ingest,
process and export your telemetry data. And it really gives you this
vendor agnostic solution that can very easily transform data from one format into another. And that's because under the covers it's using OTLP,
right? The collector converts everything to OTLP and that allows us
to do it in a standard way to perform different processing capabilities.
Before that data is emitted from the collector instance. Now again,
you don't have to use the collector, it's just available there to provide some capabilities
that you may be looking to generically solve for. It can eliminate
some of the proprietary aspects of using a vendor solution here,
and it can even help you consolidate the number of agents or collectors that you're
deploying because it can actually support multiple different signals for you.
Now, if you're looking to learn more, there's plenty of documentation out there.
We kind of showed you some on GitHub, but there's also the opentelemetry.io site, which has some rich getting started information: how to bootstrap the collector as an instance in different environments,
as well as taking a look at the demo environment that Curtis was referring to.
So definitely encourage you to kind of take a look at some of the material
out there. And again, feel free to contribute and add more. The more documentation we
can provide, the easier it will be for people to kind of get started.
So thank you so much. We hope you enjoyed this talk. We hope you learned
a few things about how the collector can be used in your environment.
And if you have any questions, feel free to ping us on the CNCF
Slack instance. Curtis and I are both very active.
Thank you so much and we'll be chatting soon.