Commonly Asked Demo Questions Flashcards
How big is the Datadog agent? What is the agent overhead?
In terms of resource consumption the Datadog agent consumes roughly:
Resident memory (actual RAM used): 50MB
CPU: less than 1% of CPU time on average
Disk: roughly 120 MB on Linux, 60 MB on Windows
Network: 10-50 KB of bandwidth per minute
*The stats listed above are based on an EC2 m1.large instance running for 10+ days.
For APM, resource consumption can vary a lot depending on actual usage. For minimal usage, the order of magnitude is similar to the numbers listed above.
Max total trace size: 10MB
Objection: But you don’t support on-prem!
“Actually, it’s a common misconception that we don’t support on-premise use cases. Many of our customers, including one of the largest healthcare platforms, used Datadog to monitor their migration from a mostly on-prem environment to the cloud.”
Do we have to use the agent?
The agent is not necessary, but it is highly recommended.
For starters, the OS metrics that you get are a very helpful baseline for your servers.
You also get more metrics (versus CloudWatch, for example).
Also, some of our integrations require the agent to collect metrics.
To enable APM, the agent plus a tracing client library for your language (Python, Ruby, Go, etc.) is required (Linux)
How do you install the agent with Chef/Puppet?
We have a cookbook for Chef and a module for Puppet
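As a rough illustration of how little configuration this takes, here is a minimal Chef sketch based on the public datadog cookbook; the attribute names and recipe name follow that cookbook's conventions, and the API key is a placeholder:

```ruby
# In a Chef recipe (assumes the public "datadog" cookbook is a dependency):
# set the account API key via node attributes, then pull in the agent recipe.
node.default['datadog']['api_key'] = 'YOUR_DD_API_KEY'
include_recipe 'datadog::dd-agent'
```

The Puppet module works the same way: declare its agent class with your API key, and the module handles package installation and configuration.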
How does Datadog monitor containers?
We monitor containers no matter how frequently they are created or destroyed. We also monitor orchestration tools like Kubernetes, Mesos, or Amazon ECS.
We have what we call Autodiscovery, which automatically discovers containers and the apps running on them. So we can automatically detect and monitor any changes to your containerized environment and always have visibility into the apps running in it.
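For a concrete picture of Autodiscovery, here is a hedged sketch of the Kubernetes-style pod annotations the Agent reads to configure a check per container (annotation keys follow the `ad.datadoghq.com/<container>.<key>` pattern; the URL and port are illustrative):

```yaml
# Pod metadata annotations: the Agent sees these on each matching container
# and configures its nginx check automatically. "%%host%%" is a template
# variable the Agent fills in with the container's IP at runtime.
metadata:
  annotations:
    ad.datadoghq.com/nginx.check_names: '["nginx"]'
    ad.datadoghq.com/nginx.init_configs: '[{}]'
    ad.datadoghq.com/nginx.instances: '[{"nginx_status_url": "http://%%host%%:81/nginx_status/"}]'
```

Because the configuration travels with the pod spec, containers are picked up the moment they are scheduled, with no Agent restarts.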
Can you run a version of Datadog on-premise?
The agent is what you would install on-premise to collect metrics/events from your local environment and send them to your Datadog account
I’m concerned about security issues related to sending data to an external 3rd party
The data collected by default is very benign, and you can audit our Agent's code on our GitHub account to verify everything it does.
I’m concerned about security issues related to having more places of connections from my infrastructure to the internet.
You can set up the agent to report metrics via a proxy to reduce the number of hosts that actually have a connection to the internet. Our backend hosts all the data, and the Datadog UI queries it.
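A hedged sketch of what that looks like in the Agent's `datadog.yaml` (the proxy hostname and port here are placeholders):

```yaml
# Route all Agent traffic through an internal proxy so only the proxy host
# needs outbound internet access.
proxy:
  https: http://proxy.internal.example:3128
  http: http://proxy.internal.example:3128
```

Every other host then talks only to the proxy on the internal network.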
How do you transmit information over the internet?
All data transmitted between Datadog and Datadog users is protected using Transport Layer Security (TLS) and HTTP Strict Transport Security (HSTS).
How is data encrypted in-flight/at rest?
Data is encrypted at rest using a secure symmetric cipher, AES.
How is the data stored?
Our data is stored entirely on AWS, and we rely on AWS security for storage. If they ask whether we're multi-region, the answer is no; we run in US East.
What security certifications do you have?
CSA STAR and AICPA SOC
What metrics are collected for each integration? How are they decided?
Our development team analyzes which metrics are noteworthy to collect and builds them into each integration. We list the metrics collected by each integration on our Docs page.
How can we get custom metrics in?
We have a StatsD handler called DogStatsD as part of the agent, if you are already using StatsD. We also have a RESTful API.
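The official client libraries wrap this for you, but to show how simple the DogStatsD path is, here is a minimal stdlib-only Python sketch of the wire protocol (the metric names are made up for illustration):

```python
import socket

def send_dogstatsd(name, value, metric_type="c", tags=None,
                   host="127.0.0.1", port=8125):
    """Send one metric to the local Agent's DogStatsD listener over UDP.

    Datagram format (per the DogStatsD protocol):
        metric.name:value|type|#tag1:v1,tag2:v2
    """
    payload = f"{name}:{value}|{metric_type}"
    if tags:
        payload += "|#" + ",".join(tags)
    # UDP is fire-and-forget: the send succeeds even if no Agent is
    # listening, so instrumentation never blocks the application.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload.encode("utf-8"), (host, port))
    return payload

# A counter increment and a gauge reading:
send_dogstatsd("checkout.attempts", 1)            # counter
send_dogstatsd("cart.size", 12, metric_type="g")  # gauge
```

In practice you would use the official `datadog` client library, which exposes helpers like increment and gauge on top of this same datagram format.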
How many custom metrics can you take?
There is no hard limit on the number of custom metrics we can take. Our Pro pricing includes 100 custom metrics per host; if you will have more, we work out pricing based on the additional volume. That said, large custom metric counts can often be cut down by combining metrics with tags. Pricing for Lambda falls under custom metrics pricing.