All Topics

As seen in the first example, the expression I've constructed captures the field values I want. However, in the second example, not all of the values are being captured for the field I wish to extract. Why does it capture everything in the first example but not the second? I'm capturing between 1-3 digits followed by one of any letter.
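A common cause of this symptom is that rex only returns the first match per event unless max_match is set. A minimal sketch (the field name and sample values here are hypothetical):

```
| makeresults
| eval sample="12a 345b 7c"
| rex field=sample max_match=0 "(?<dose>\d{1,3}[a-zA-Z])"
| table dose
```

With max_match=0, rex extracts every occurrence into a multivalue field; without it, only the first match in each event is captured, which would explain one example working and the other appearing to "miss" values.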
Hello, I am a newbie to Splunk and am trying to get a dashboard going that queries whois and returns information related to security concerns. I am tracking authorization-failure messages in syslog on my Linux email host. I would like to take the IP address and determine, to start, where the host is located in the world based on the asn_country_code. If I do a query using just the IP address, it works, but when I try to use a variable, it fails. I have searched long and hard and cannot find the answer.

The add-on app "Network Toolkit" includes a whois function that returns many attributes when an IP or domain name is provided as the search string. One key element is rhost or asn_country_code, which I am currently interested in using. When I run the query:

(index=* OR index=_*) "authentication failure" | eval country_code = [| whois 14.177.64.163 | search attribute=asn_country_code | stats values(value) as country | eval search="\"".mvjoin(country, ",")."\"" ] | table rhost, country_code

I get the IP addresses of the hosts that failed their authentication request, but the country code is manually entered and I get only the "VN" response; this is just to prove the query works. I'd like to get the appropriate country code from whois for each host, but when I change the IP address from "14.177.64.163" to rhost, the query fails:

(index=* OR index=_*) "authentication failure" | eval country_code = [| whois rhost | search attribute=asn_country_code | stats values(value) as country | eval search="\"".mvjoin(country, ",")."\"" ] | table rhost, country_code

What am I doing wrong with the variable attribute rhost? Does it need to be in quotations? (Tried.) I am at a loss. Can someone guide me to the right answer? Thank you very much.
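For what it's worth, a subsearch like the one above runs once, before the outer search, so it cannot see the per-event value of rhost. One hedged workaround is the map command, which runs a search once per result row (slow, and capped by maxsearches; a sketch only, assuming the Network Toolkit's whois command accepts the substituted token):

```
(index=* OR index=_*) "authentication failure"
| stats count by rhost
| map maxsearches=50 search="| whois $rhost$
    | search attribute=asn_country_code
    | stats values(value) as country_code
    | eval rhost=\"$rhost$\""
| table rhost, country_code
```

If the IP-to-country mapping is all that is needed, a CSV lookup (or an external lookup script) keyed on rhost would scale better than per-row whois calls.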
I have http statuses that come in from 2 different indexes, with almost the same event, but the event from one indexer has a combination of space and comma as a delimiter and the other just has spaces. How do I split the event in the search string so that I get the status from both indexes? I have

| rex field=_raw "^(?:[^\s]*\s){8}(?P<statusCode>\d+)"

but this only checks for space; I need to also include comma as a delimiter.
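One way to treat both spaces and commas as delimiters is a character class, so that a run of either counts as a single separator (a sketch; it assumes the status code is always the 9th field in both formats):

```
| rex field=_raw "^(?:[^\s,]+[\s,]+){8}(?P<statusCode>\d+)"
```

Here [^\s,]+ matches a field containing neither spaces nor commas, and [\s,]+ swallows whatever mix of spaces and commas follows it.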
I could use some expert assistance with a regex for breaking down a custom user-agent field in an IIS log into component fields while avoiding a conflict with other fields. We run software that uses IIS as a file server, and the software injects a custom user-agent value into the IIS log with every request. Here is a sample of the user agent:

JTDI+(JDMS+1.0.11.2.20200807;+Win10+10.0;+229.0/62/-1;Branch|UnitType|System|City|ST|SiteIDOverride|SvrType|2.5;C8F7504F064E;UTA-AVD)

The IIS log is space delimited, so all of that lands in the cs_user_agent field just fine. I made a sort of running mess of extracting the subfields. Within the string are subfields delimited by semicolon, and sub-subfields delimited by / and |. Here are my separate extractions, in order as the fields appear in the string:

^[^\(\n]*JTDI\+\((?P<jkversion>[^;]+)
^[^;\n]*;(?P<os>[^;]+)
^(?:[^;\n]*;){2}(?P<freespace>[^/]+)
^(?:[^;\n]*;){2}\+\d+\.\d+/(?P<pending>\d+)
^(?:[^;\n]*;){3}(?P<SiteDescription>[^;]+)
^(?:[^;\n]*;){4}(?P<MAC>[^;]+)
^(?:[^;\n]*;){5}(?P<cs_hostname>[^\)]+)

Technically, after the 'pending' field there should be a 'hits' field (represented by the -1 above), but we don't use it, so I didn't bother extracting it.

So my problem is the parentheses. If a filename shows up in the cs_uri_stem field that includes them, like filename(copy1).txt, the () throw off my jkversion and cs_hostname extractions, because I don't know how to accommodate the possible existence of parentheses outside the cs_user_agent field.

So I guess my question is two-fold.
1) I know my overall user-agent extraction should be a single transform instead of all separate field extractions, but I'm not sure how to tie them all together, because I couldn't see a way to extract strings like that in the field extractor interface in Splunk.
2) How can I fix my regex so that parentheses appearing in other fields don't break my jkversion and cs_hostname extractions?
Help?
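One hedged approach to question 2 is to run the extraction against cs_user_agent instead of _raw, so parentheses in cs_uri_stem can never interfere, and to anchor on the literal JTDI+( prefix. A single combined rex along these lines (the hits field is included for completeness; note the literal + signs are consumed by the pattern, so os and freespace come out without the leading + the original extractions kept):

```
| rex field=cs_user_agent "JTDI\+\((?<jkversion>[^;]+);\+(?<os>[^;]+);\+(?<freespace>[^/]+)/(?<pending>[^/]+)/(?<hits>[^;]+);(?<SiteDescription>[^;]+);(?<MAC>[^;]+);(?<cs_hostname>[^\)]+)\)"
```

For a props/transforms version, the same regex should work in a REPORT-style extraction with SOURCE_KEY set to the cs_user_agent field rather than the default _raw; treat that as an untested sketch.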
I am using a radialGauge to display a metric value and adding thresholds to it. I want to show the threshold values along with the visualization using CSS, as my organization's Splunk team does not allow adding JS code. Alternatively, any visualization that shows the single value along with its threshold values on the board would help. Thanks!
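If JS is off the table, it may be worth checking whether the built-in radial gauge options in Simple XML already suffice, since they draw the colored ranges and their boundary values without any custom code. A sketch with hypothetical search and threshold numbers:

```
<chart>
  <search>
    <query>index=my_metrics | stats latest(value) as value</query>
  </search>
  <option name="charting.chart">radialGauge</option>
  <option name="charting.chart.rangeValues">[0,50,75,100]</option>
  <option name="charting.gaugeColors">["0x53a051","0xf8be34","0xdc4e41"]</option>
</chart>
```

The rangeValues boundaries (50 and 75 here) are rendered on the gauge face itself, which may remove the need for CSS entirely.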
I'm new to Splunk, but I need to figure out how to count the number of error codes of a certain type over a rolling 7-day span going back to the start of the year, so from 01/01/2020 to the current date of the search, broken down into rolling 7-day averages. So if it's Wednesday, it goes back to Thursday of the previous week; if it's Thursday, it goes back to Friday of the previous week; and it goes all the way back to the beginning of the year, broken out into 7-day periods. Is this possible with Splunk?

Error_Code = "x" | timechart count span=1d

is as far as I've gotten.
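If 7-day buckets anchored to today (rather than to calendar weeks) are acceptable, the aligntime option of bin may get close, assuming a reasonably recent Splunk version (index and field names here are hypothetical):

```
index=my_index Error_Code="x" earliest=01/01/2020:00:00:00
| bin _time span=7d aligntime=@d
| stats count by _time
```

aligntime=@d anchors the 7-day buckets to today's midnight, so each bucket ends on a day boundary relative to the date the search runs, e.g. last Thursday through this Wednesday when run on a Wednesday.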
Hi Splunkers, I would like to know if anyone has faced the issue of multiple incidents getting created in ServiceNow for the same entity/issue: if the incident is not resolved in ServiceNow, the Splunk custom alert generates another ticket for the same issue when it sweeps through the data again and finds the same entity/issue. The correlation ID is supposed to prevent that from happening, as it tells ServiceNow that a ticket already exists for that issue, but it doesn't seem to be working; multiple incidents are getting created in ServiceNow for the same asset. I have added $result.asset_tag$, as asset_tag is a unique field, but that hasn't helped either. Any advice? Thanks, Akriti
How do I find a list of hosts that have not reported in within a week? I tried the following, but it is not producing any results.

... | eval etime=strptime(time, "%d/%m/%Y"), sevenDaysAgo=relative_time(now(), "-7d") | where etime < sevenDaysAgo
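One thing to note: a search over events can only ever return hosts that did report, so filtering events by age cannot surface silent hosts. An alternative is the metadata command, which reports when each known host last sent data (a sketch; restrict index=* as appropriate):

```
| metadata type=hosts index=*
| where recentTime < relative_time(now(), "-7d")
| eval lastSeen=strftime(recentTime, "%F %T")
| table host, lastSeen
```

metadata returns recentTime (the most recent time Splunk saw data for the host), so the where clause keeps only hosts silent for more than seven days.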
Can I remove an indexer from deployed forwarders' outputs.conf using the deployment server?
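Generally yes, provided the outputs.conf the forwarders use is itself delivered by the deployment server, since a deployed app cannot override settings in a forwarder's own system/local. A sketch of a deployed app with one indexer removed (app and host names are hypothetical):

```
# deployment-apps/all_forwarder_outputs/local/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# the retired indexer is simply omitted from the list;
# forwarders pick up the change on their next phone-home
server = idx1.example.com:9997, idx2.example.com:9997
```

Map the app to the relevant forwarders in serverclass.conf with restartSplunkd=true so the new output list takes effect.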
How do I find a missing forwarder in the Monitoring Console reports and fix the issue? I select the item in the Monitoring Console for the missing forwarder, but it does not indicate what or where the problem is. Then, once the forwarder is found, what are the steps to fix or replace it? Thank you.
Our Splunk instance was set up with a super user that has since been deleted. We do not know how to get back to the root account for Splunk. How do I tell my sys admin to recreate the super user for Splunk?
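For reference, the documented way to recreate a lost admin account on a local (non-Cloud) instance is to move the passwd file aside and seed a new admin via user-seed.conf; a sketch (substitute your own password):

```
# 1. Stop Splunk:        $SPLUNK_HOME/bin/splunk stop
# 2. Move the old file:  mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak
# 3. Create $SPLUNK_HOME/etc/system/local/user-seed.conf containing:

[user_info]
USERNAME = admin
PASSWORD = choose-a-strong-password

# 4. Start Splunk; the admin user is recreated on first boot.
```

Note that other users' credentials live in the moved passwd file, so plan to re-create or restore them afterward.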
I'm trying to figure out some rough costs for my setup before moving forward. I'd like to export logs from CloudWatch into Splunk Cloud, and came across this post: https://www.splunk.com/en_us/blog/tips-and-tricks/how-to-easily-stream-aws-cloudwatch-logs-to-splunk.html

What I wanted to know is: when I'm "streaming" logs to Splunk Cloud from CloudWatch, do I still end up having to pay CloudWatch for the volume of logs ingested? Or is there a way to stream to Splunk Cloud and bypass the need to store/ingest in the CloudWatch backend, so that I only have to pay the cost for logs ingested in Splunk Cloud?
Hello, I'm trying to create a vulnerability scan summary. The scan is executed against individual devices, and each discovery of a vulnerability is brought into Splunk as an individual result. For example, if 10 servers all have the same vulnerability (we'll say ID="10"), is there a way to extract the DNS name from that finding and place it into a summary column?

What I have: 3 results, 3 different DNS names with the same vulnerability:

1. dns="i-adfkldslkjkljsdf", vID="10"
2. dns="i-adfkldslkjkljsgg", vID="10"
3. dns="i-adfkldslkjkljsyy", vID="10"

What I'd like to see: a summary table that features all of those DNS names in a single field, joined on that vID:

vID="10" vDesc="RCE for Adobe" Assets="i-adfkldslkjkljsyy","i-adfkldslkjkljsdf","i-adfkldslkjkljsgg"

I think this would be done using a subquery, but for the life of me I can't figure it out. I'd appreciate any assistance you may be able to provide. Thanks!
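No subquery should be needed for this shape of result; stats values() collects the distinct DNS names per vID into one multivalue field. A sketch (the index name is hypothetical, and vDesc is assumed to be a field on the events or available via a lookup):

```
index=vuln_scan
| stats values(dns) as Assets by vID, vDesc
| eval Assets=mvjoin(Assets, ",")
```

Drop the mvjoin line to keep Assets as a multivalue field, which usually renders more readably in a table.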
Hey friends, I am facing a problem with my new dashboard.

My event:

{
  kubernetes: {
    container_name: adapter
    docker_id: a15db0337d70979f0f6e042f5bd609bfe1c42a97472faea56c77924c2ec43158
    namespace_name: default
    pod_name: adapter-767585d989-x5fj7
  }
  log: {"thread":"simpleMessageListenerContainer-2","level":"INFO","loggerName":"adapter.repository.S3Repository","message":"Getting s3object bucket=adapter-test, key=in/1234.json","endOfBatch":true,"loggerFqcn":"org.apache.logging.slf4j.Log4jLogger","instant":{"epochSecond":1615350586,"nanoOfSecond":904000000},"threadId":14,"threadPriority":5,"app_name":"adapter"}
  log_processed: {
    app_name: adapter
    endOfBatch: true
    instant: {
      epochSecond: 1615350586
      nanoOfSecond: 904000000
    }
    level: INFO
    loggerFqcn: org.apache.logging.slf4j.Log4jLogger
    loggerName: adapter.repository.S3Repository
    message: Getting s3object bucket=adapter-test, key=in/1234.json
    thread: simpleMessageListenerContainer-2
    threadId: 14
    threadPriority: 5
  }
  stream: stdout
  time: 2021-03-10T04:29:46.90639262Z
}

When I create a dashboard out of this event, I get 2 entries in each cell of the dashboard. My dashboard query is as below:

index=logs sourcetype=test
| rename log_processed.app_name as AppName
| rename log_processed.loggerName as LoggerName
| rename log_processed.level as LogLevel
| rename log_processed.message as Message
| rename kubernetes.pod_name as EKS_POD
| table _time, AppName, LoggerName, LogLevel, Message, EKS_POD
| where Message!=""

The output shows every value twice per cell; ideally each cell should contain the value only once. Could you please help with this?
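One common cause of doubled values is the JSON being extracted twice: once at index time (e.g. INDEXED_EXTRACTIONS=json on the forwarder) and again at search time (KV_MODE=json), which makes every field multivalued with identical entries. If that is the cause here, the usual fix is a props.conf stanza on the search head (sourcetype taken from the post; verify the root cause before applying):

```
# props.conf on the search head
[test]
KV_MODE = none
AUTO_KV_JSON = false
```

A quicker search-time workaround is deduplicating each field as it is renamed, e.g. | eval AppName=mvdedup('log_processed.app_name').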
Hello, I am trying to collect stats per hour using a data model for an absolute time range that starts 30 minutes past the hour. The query looks something like:

|tstats count, sum(X), sum(Y) FROM datamodel=ZModel BY _time span=1h

I choose a time range using the Date & Time Range picker, but the range starts at 30 minutes past the hour, say Jan 1 16:30 to Jan 2 16:30. The problem is that the time 'buckets' in the result snap to the hour, so the hourly ranges come out as 16:00 - 17:00, 17:00 - 18:00 and so forth, rather than 16:30 - 17:30, 17:30 - 18:30 and so forth. Is there any way to make the time buckets start relative to the specified start time rather than snap to the hour? I tried using earliest= and latest= instead of the Date & Time Range picker, but that didn't help either.
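Since tstats itself snaps its span, one hedged approach is to pull finer-grained buckets from tstats and then re-bin them with an offset via bin's aligntime option (available in recent Splunk versions):

```
| tstats count, sum(X) as sumX, sum(Y) as sumY FROM datamodel=ZModel BY _time span=1m
| bin _time span=1h aligntime=@d+30m
| stats sum(count) as count, sum(sumX) as sumX, sum(sumY) as sumY by _time
```

aligntime=@d+30m anchors the hourly buckets to 30 minutes past midnight, which yields 16:30 - 17:30 style ranges; the 1-minute first pass costs more than a straight span=1h tstats but keeps the data-model acceleration benefit.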
When we change the time span in a dashboard for this report, the counting of the values changes after 15 hours. The stats values go from a 1 minute span to a 5 minute span. I believe we need a way to use a variable for the time and counting section, i.e. the bucket span=1m and the avg(count)/60 divisor below.

sourcetype=ib:ddns index=ib_dns
| rex field=REST "'(?<ZONE>[^ ]+)/IN'"
| eval TYPE=if(isnull(TYPEA), case(match(REST, "updating zone '[^ ]+/IN': adding an RR at") OR match(REST, "updating zone '[^ ]+/IN': delet"), "Success", match(REST, "update '[^ ]+/IN' denied"), "Reject", match(REST, "updating zone '[^ ]+/IN': update unsuccessful.*prerequisite not satisfied \([NY]XDOMAIN\)"), "PrerequisiteReject", match(REST, "updating zone '[^ ]+/IN': update failed"), "Failure"), TYPEA)
| eval VIEW=if(isnull(VIEW),"_default",replace(VIEW,"view (\d+)","\1"))
| lookup dns_viewkey_displayname_lookup VIEW output display_name
| bucket span=1m _time
| stats count by _time TYPE
| timechart bins=1000 eval(avg(count)/60) by TYPE
| interpolate 120
| eval Success=if(isnull(Success),0,Success)
| eval Failure=if(isnull(Failure),0,Failure)
| eval Reject=if(isnull(Reject),0,Reject)
| eval PrerequisiteReject=if(isnull(PrerequisiteReject),0,PrerequisiteReject)
| rename PrerequisiteReject as "Prerequisite Reject"
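One way to make those two values variable is a pair of dashboard tokens, with the divisor derived from the chosen span so they can never drift apart. A Simple XML sketch (token names are arbitrary):

```
<input type="dropdown" token="spanval">
  <label>Span</label>
  <choice value="1m">1 minute</choice>
  <choice value="5m">5 minutes</choice>
  <default>1m</default>
  <change>
    <condition value="1m"><set token="spansecs">60</set></condition>
    <condition value="5m"><set token="spansecs">300</set></condition>
  </change>
</input>
```

In the search, refer to the tokens as | bucket span=$spanval$ _time and eval(avg(count)/$spansecs$); giving timechart an explicit span=$spanval$ as well stops it from re-binning on its own as the time range grows.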
New to Splunk dashboards - sorry for the dumb question. I have a dashboard with a Line Chart Alarm Panel and, below that, a Chart Alarm Count Panel and an Event Panel. How do I get the Chart Alarm Count Panel and Event Panel to update when I zoom in on the Line Chart Panel? (They don't update from the zoom; they keep the initial Chart Panel info.) Thanks for pointing me in the right direction!
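In Simple XML, a chart's zoom selection can set tokens that the other panels then use as their time range. A sketch (index, queries, and token names here are hypothetical):

```
<chart>
  <search><query>index=alarms | timechart count</query></search>
  <selection>
    <set token="zoom.earliest">$start$</set>
    <set token="zoom.latest">$end$</set>
  </selection>
</chart>

<chart>
  <search>
    <query>index=alarms | stats count by alarm_type</query>
    <earliest>$zoom.earliest$</earliest>
    <latest>$zoom.latest$</latest>
  </search>
</chart>
```

$start$ and $end$ are the predefined selection tokens for the zoomed range; give zoom.earliest/zoom.latest defaults (e.g. via an <init> block) so the lower panels render before the first zoom.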
I have a lot of JSON data that contains periods in the keys. I want to be able to expand one of the arrays in the data with the spath command. It does not seem to work with a period in the JSON data in the simple example below:

| makeresults
| eval _raw=" { \"content\":{ \"jvm.memory\": [{\"num\":1.0},{\"num\":2.0}] } }"
| spath
| spath path=content.jvm.memory{} output=event_data
| mvexpand event_data
| eval _raw=event_data
| kv

The following query does work, with an underscore in the key name:

| makeresults
| eval _raw=" { \"content\":{ \"jvm_memory\": [{\"num\":1.0},{\"num\":2.0}] } }"
| spath
| spath path=content.jvm_memory{} output=event_data
| mvexpand event_data
| eval _raw=event_data
| kv

Are there any ways to work around the periods in the keys? Maybe some sort of mass replace of the periods in the key names only (not the values), or some sort of way to escape the periods in the spath command?
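Since spath treats every period in a path as a level separator, one workaround is to rewrite the offending key names in _raw before spath runs. A sketch targeting just this one key (a general rename of all dotted keys would need a more careful regex so that values are left untouched):

```
| makeresults
| eval _raw=" { \"content\":{ \"jvm.memory\": [{\"num\":1.0},{\"num\":2.0}] } }"
| eval _raw=replace(_raw, "\"jvm\.memory\"", "\"jvm_memory\"")
| spath path=content.jvm_memory{} output=event_data
| mvexpand event_data
| eval _raw=event_data
| kv
```

Quoting the key in the replace pattern ("jvm.memory" including the surrounding double quotes) confines the substitution to key positions, which keeps period-containing values intact.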
Is it possible to change the time format for the column "Receipt Time" in "Incident Review"? Currently I see the time in a format like this: 3/10/21 4:00:47.000 PM. I would like to change the display to 24-hour format. I don't want to change the format of _time from the logs, only the _time in "Incident Review", which shows the time of the notable event. Thank you for the help.
Hi, I am trying to use Telegraf to send data to Splunk HEC, but I am not sure how to get past a certificate issue. The error is:

Error writing to outputs.http: Post "https://prd-p-9prd1.splunkcloud.com:8088/services/collector": x509: certificate is not valid for any names, but wanted to match prd-p-9prd1.splunkcloud.com

Telegraf configuration:

[global_tags]
  index = "vault_telemetry"
  datacenter = "us-east-1"
  role = "vault-server"
  cluster = "vault"

[agent]
  interval = "60s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  hostname = ""
  omit_hostname = false

[[inputs.statsd]]
  protocol = "udp"
  service_address = ":8125"
  metric_separator = "."
  datadog_extensions = true

[[outputs.http]]
  url = "https://prd-p-9prd1.splunkcloud.com:8088/services/collector"
  data_format = "splunkmetric"
  splunkmetric_hec_routing = true
  [outputs.http.headers]
    Content-Type = "application/json"
    Authorization = "Splunk f76599e2-77a5-xxxx-xxxx-b5af6d97xxxx"
    X-Splunk-Request-Channel = "f76599e2-77a5-xxxx-xxxx-b5af6d97xxxx"

Thank you
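Two hedged things worth checking: Splunk Cloud usually exposes HEC at an http-inputs-<stack> hostname on port 443 rather than the stack's web hostname on 8088, which could explain why the certificate does not match; and, strictly for testing, Telegraf's HTTP output can skip verification. A sketch of the relevant fragment (the http-inputs- URL is an assumption based on the stack name in the error):

```
[[outputs.http]]
  # Splunk Cloud HEC typically lives at the http-inputs- endpoint
  url = "https://http-inputs-prd-p-9prd1.splunkcloud.com:443/services/collector"
  data_format = "splunkmetric"
  splunkmetric_hec_routing = true
  # testing only: disables TLS certificate verification
  insecure_skip_verify = true
```

Once the correct endpoint is confirmed, remove insecure_skip_verify so certificates are validated again.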