All Topics

I have two lists: one has hostnames and the other has hostname prefixes. I would like to create a search whose output shows the hosts whose names do not contain any of the prefixes in the second list. Example:

Inputlookup (Hostname)    Lookup (Hostname Prefix)
appletown                 town
treeville                 tree

I would like to create a search showing the hostnames from the first list that do not contain any of the prefixes in the second.
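A minimal sketch of one way to do this, assuming the two lookups are named hostnames.csv (field Hostname) and prefixes.csv (field Prefix); the subsearch turns each prefix into a wildcard term that NOT then excludes:

| inputlookup hostnames.csv
| search NOT
    [| inputlookup prefixes.csv
     | eval Hostname="*" . Prefix . "*"
     | fields Hostname]
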
Splunkbase is showing different versions of apps to different people. For instance, the Splunk ThreatConnect App is on version 1.0.5 in my view of Splunkbase, but my colleague only sees version 1.0.4 for the same app. Can someone from support explain why that is?
Hi folks, is there a way to enable SSL certificate validation for an 'httpout' stanza in 'outputs.conf', like we can with 'sslVerifyServerCert', etc. for 'tcpout' stanzas? I wasn't able to find a solution for this. The UF doesn't seem to verify against the CA provided in 'server.conf/sslConfig/caCertFile' either. Thanks for your help!
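For reference, this is the kind of validation the question is comparing against; a hedged sketch of the tcpout form (host names and paths are placeholders):

# outputs.conf -- server certificate validation for a tcpout group
[tcpout:primary_indexers]
server = indexer.example.com:9997
sslVerifyServerCert = true
sslCommonNameToCheck = indexer.example.com

# server.conf -- CA bundle the forwarder verifies against
# (sslRootCAPath is the current name; caCertFile is the older one)
[sslConfig]
sslRootCAPath = /opt/splunkforwarder/etc/auth/cacert.pem

Whether 'httpout' honors an equivalent setting is exactly what is being asked; nothing here confirms that it does.
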
I have to set up an alert with a cron schedule from 9pm to 7am. When I tried to build it in crontab format it didn't come out right. How can I create the cron schedule for these timings?
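A hedged sketch: cron hour ranges cannot wrap past midnight in a single range, so the window has to be split in two. If the alert should run at the top of every hour from 21:00 through 07:00:

0 21-23,0-7 * * *

Adjust the minute field (and drop hour 7 if the last run should be at 06:00) to taste.
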
I am writing a playbook that loops through a list of case ids to delete. The action fails after hitting 50 runs, and I have written the playbook to continue if there are still valid ids left in the list. However, after the failure it won't continue down to the remediation code I have written below. How can I get the playbook to continue after a failed action and just mark the action failed, rather than exiting the playbook?
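A hedged sketch of one way to keep going (classic SOAR playbook API; the callback and helper names here are illustrative, not your actual block names): branch on the callback's success flag instead of letting the failure path end the run.

def delete_case_cb(action=None, success=None, container=None,
                   results=None, handle=None, **kwargs):
    if not success:
        # Log the failed batch but do not stop the playbook
        phantom.debug("delete failed for this batch; continuing with the rest")
    # Either way, fall through to the next batch of ids
    # (delete_next_batch is a hypothetical helper for the loop step)
    delete_next_batch(container=container)

If the failing step is a synchronous loop rather than a callback chain, the equivalent is routing each delete's result through its own decision/filter so a failed result lands on the "continue" branch instead of ending the run.
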
Hi, I am unable to use strptime() correctly here. My code is:

index="ABC"
| eval time=strptime(_time, "%Y-%m-%dT%H:%M:%S")
| bin time span=15m
| table time

But the table has no output. Can you please help? Thanks!
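For what it's worth, _time is already an epoch (numeric) timestamp, and strptime() expects a string in the given format, so the eval returns null and every row of the table is empty. A minimal sketch that bins first and formats afterwards:

index="ABC"
| bin _time span=15m
| eval time=strftime(_time, "%Y-%m-%dT%H:%M:%S")
| table time
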
I am trying to get a report of all distribution groups in Splunk and what they send in, say, a 12-month period. We don't have the Exchange app/add-on, and my understanding is that it no longer really exists. Is there a way to identify distribution groups in the search? I have access to the index and I can find distribution groups, but I can't look for the DLs themselves. Thanks in advance.
Hi, I'm trying to maintain a user's very first login using a lookup; the scheduler runs twice a day. The userLogin field is a combination of username, userId, and a uniqueId associated with each login. I just want the username and userId from the userLogin field so I can maintain a single record per user, but to know the exact logged-in user records I have to display the userLogin field as well, and I have to maintain the user's earliest login dateTime. Currently I'm using a CSV lookup with records for the past three months, but if I later expand to the past six months I'll have to update the earliest login dateTime for existing users in the lookup and append new users with their login dateTime. I'm a bit worried about performance if the lookup file grows large. Here's the query I've managed to write, but I'm struggling to track the earliest dateTime. Any suggestions would be highly welcomed. Thanks in advance.

index=user_login_details
| rex field=userLogin "(?<userName>\s+\d{5}).*"
| dedup userName
| eval Time=strftime(_time,"%Y-%m-%dT%H:%M:%S")
| table userName, userLogin, Time
| inputlookup user_details.csv append=true
| dedup userName
| outputlookup user_details.csv append=true
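A hedged sketch of one approach (field and file names taken from the query above): compute the earliest login with min() instead of dedup, merge in the existing lookup rows, keep the earliest record per user, and overwrite the file rather than appending, so it never accumulates duplicates:

index=user_login_details
| rex field=userLogin "(?<userName>\s+\d{5})"
| stats earliest(userLogin) as userLogin, min(_time) as firstLogin by userName
| inputlookup append=true user_details.csv
| eval firstLogin=coalesce(firstLogin, strptime(Time, "%Y-%m-%dT%H:%M:%S"))
| sort 0 userName firstLogin
| dedup userName
| eval Time=strftime(firstLogin, "%Y-%m-%dT%H:%M:%S")
| table userName, userLogin, Time
| outputlookup user_details.csv

Since each run keeps only one row per user, the lookup size should track your user count rather than your event volume.
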
Is there a way to get logs in JSON format for an API call from a Spring Boot application?
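One common approach (a sketch, assuming you can add the third-party logstash-logback-encoder dependency; it is not part of Spring Boot itself) is to swap the console encoder for a JSON one in logback-spring.xml:

<!-- logback-spring.xml: emit every log line as JSON -->
<configuration>
  <appender name="JSON_CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
  </appender>
  <root level="INFO">
    <appender-ref ref="JSON_CONSOLE"/>
  </root>
</configuration>

This only controls the output format; logging the API request/response bodies themselves would still need your own filter or interceptor.
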
Hello, our use case is to add a viz that is a URL for an interactive map: New Jersey County Map. This map should be displayed on the dashboard at all times. When a user clicks on any county, a data table will open in the lower left of the window/panel/viz. We tried inserting it as an image and changed the allowed domain for images in web.conf: dashboards_csp_allowed_domains = *.njogis-newjersey.opendata.arcgis.com. But since it is not really an image, but an image rendered on a webpage, this didn't work; we received an error. With classic dashboards we used an iframe on occasion, but this was kludgy at best. We haven't worked with the REST API, but could that be a possible solution? Thanks in advance and God bless, Genesius
Hi, my overall goal is to create a resulting data table with headings including HourOfDay, BucketMinuteOfHour, DayOfWeek, and source, as well as an upperBound and lowerBound. My current query is as follows:

index="akamai" sourcetype=akamaisiem
| eval time = _time
| eval time=strptime(time, "%Y-%m-%dT%H:%M:%S")
| bin time span=15m
| eval HourOfDay=strftime(time, "%H")
| eval BucketMinuteOfHour=strftime(time, "%M")
| eval DayOfWeek=strftime(time, "%A")
| stats avg(count) as avg stdev(count) as stdev by HourOfDay,BucketMinuteOfHour,DayOfWeek,source
| eval lowerBound=(avg-stdev*exact(2)), upperBound=(avg+stdev*exact(2))
| fields lowerBound,upperBound,HourOfDay,BucketMinuteOfHour,DayOfWeek,source
| outputlookup state.csv

However, it produces zero results. Can you please help? I am using the following article as a guide, as this is for an anomaly detection project: https://www.splunk.com/en_us/blog/platform/cyclical-statistical-forecasts-and-anomalies-part-1.html I appreciate any help. Thanks!
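Two things stand out: strptime() is applied to _time, which is already an epoch number, so time becomes null; and avg(count)/stdev(count) reference a count field that is never computed. A hedged sketch closer to the blog's approach, with a stats count step added:

index="akamai" sourcetype=akamaisiem
| bin _time span=15m
| stats count by _time, source
| eval HourOfDay=strftime(_time, "%H")
| eval BucketMinuteOfHour=strftime(_time, "%M")
| eval DayOfWeek=strftime(_time, "%A")
| stats avg(count) as avg stdev(count) as stdev by HourOfDay, BucketMinuteOfHour, DayOfWeek, source
| eval lowerBound=(avg-stdev*exact(2)), upperBound=(avg+stdev*exact(2))
| fields HourOfDay, BucketMinuteOfHour, DayOfWeek, source, lowerBound, upperBound
| outputlookup state.csv
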
Just starting out with provisioning Splunk 9.x via an AWS AMI and Terraform. Does anyone have any idea if it is possible to change the admin password using a user_data script on the AMI? I found one mention of using export password="<password>", but that didn't seem to work; it still used the default SPLUNK-$instance id$. We would like to have the password changed during provisioning, if possible. Thanks.
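A hedged sketch of one approach, assuming the AMI installs under /opt/splunk and that Splunk has not yet started when user_data runs (user-seed.conf is only honored on first start):

#!/bin/bash
# Seed the admin credential before Splunk's first launch
cat > /opt/splunk/etc/system/local/user-seed.conf <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = <your-password-here>
EOF

If the AMI has already started Splunk once, the seed file is ignored, and you would need something like /opt/splunk/bin/splunk edit user admin -password '<new>' -auth admin:'<current>' instead.
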
Hi, I am running the following query to check seasonality in my index:

index="ABC"
| timechart count by _time
| timechart

However, I am receiving the following error and I do not understand it at all:

Error in 'timechart' command: Repeated group-by field '_time'. The search job has failed due to an error. You may be able to view the job in the Job Inspector.

Can you please help? Many thanks!
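For context, timechart always groups by _time implicitly, so "by _time" repeats the group-by field (and the trailing bare timechart would fail next). A minimal sketch of an equivalent that works:

index="ABC"
| timechart span=1h count

Use a different field after "by" (e.g. by host) if you want a split series per value.
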
Hi, I am writing a query to calculate the expected frequency of data in an index:

index=ABC
| eval time_diff=_time-lag(_time)
| stats avg(time_diff) as avg_time_diff

However, when I try to run it, I receive the following error message:

Error in 'eval' command: The 'lag' function is unsupported or undefined. The search job has failed due to an error. You may be able to view the job in the Job Inspector.

Can you please help?
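eval has no lag() function in SPL; the usual substitute is streamstats, which can carry the previous event's _time forward. A hedged sketch:

index=ABC
| sort 0 _time
| streamstats current=f window=1 last(_time) as prev_time
| eval time_diff=_time-prev_time
| stats avg(time_diff) as avg_time_diff
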
Hi, I'm trying to create a correlation search in Splunk and I am unable to figure out the options: time range (earliest time / latest time) and cron schedule. Could anyone explain these from scratch? For example, if I want to schedule a search with an earliest time of 1h30min ago, what do I have to enter? Thanks.
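A sketch of one common setup (the values are examples, not requirements): relative time modifiers express the search window, and the cron string controls how often it runs.

Earliest time:  -90m@m        (1 hour 30 minutes ago, snapped to the minute)
Latest time:    now
Cron schedule:  */30 * * * *  (run every 30 minutes)
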
Hi all, good day. I have Juniper data in Splunk (sourcetype=juniper*) and need some searches to create dashboards that are useful for the Juniper team to check for any device down or outage. Could you please suggest searches for this Juniper dashboard?
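As a starting point only, since field names depend on the add-on that parses your Juniper data, a hedged sketch that trends likely outage keywords per device:

sourcetype=juniper* ("SNMP_TRAP_LINK_DOWN" OR "down" OR "outage")
| timechart span=15m count by host

Validate the actual event text and extracted fields in your data before building panels from this.
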
I have a user lookup which shows which department each user belongs to. I want to join this with another table on user so I can get the respective department for each user. However, I would also like to have the headcount of each department showing. The code below doesn't work, but if it makes sense, I would like to achieve something like it:

index=...
| join type=left user
    [| inputlookup lookup
     | rename cn as user
     | stats count(user) as headcount by department]
| table logon_time user department headcount
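The stats in the subsearch collapses everything down to department/headcount rows, so there is no user field left to join on. eventstats adds the headcount while keeping one row per user; a hedged sketch (lookup and field names taken from the question):

index=...
| join type=left user
    [| inputlookup lookup
     | rename cn as user
     | eventstats count as headcount by department
     | fields user department headcount]
| table logon_time user department headcount
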
I have a lookup in which column A is the index and column B is the number of hosts. I would like to query the number of hosts per index that Splunk actually returns and compare it against the lookup, i.e. if I have three hosts for an index in my lookup but Splunk returns two, I would like to see that. Probably a difficult query, but one I am struggling with. Thanks in advance!
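A hedged sketch, assuming the lookup is defined as host_counts with fields index and expected_hosts (adjust the names to match your file):

| tstats dc(host) as observed_hosts where index=* by index
| lookup host_counts index OUTPUT expected_hosts
| eval missing=expected_hosts-observed_hosts
| where missing>0
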
Currently my heavy forwarder is receiving unwanted logs from a lot of different devices, and it is taking up a lot of space. Is there a way to reject logs from all servers and manually whitelist the server logs we want to monitor, so they don't take up free space on my heavy forwarder? So basically: reject logs from every server; accept logs only from server1 and server2. Thank you in advance.
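For inputs that arrive over the network, inputs.conf supports an acceptFrom allow-list; a hedged sketch for a splunktcp input (the port and addresses are placeholders — use your two servers' actual IPs/CIDRs):

# inputs.conf on the heavy forwarder
[splunktcp://9997]
# Only these two senders may connect; everything else is rejected
acceptFrom = 10.0.0.11, 10.0.0.12

For forwarders you control, trimming their own outputs/inputs is the cleaner fix, since acceptFrom only blocks traffic at the receiving side.
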
index=na160 starttime="02/02/2023:00:00:00" endtime="02/02/2023:24:00:00" requestId="TID:131610985000004c2d"
| stats count as 240_COUNT by logRecordType
| join logRecordType type=outer
    [search index=na160 starttime="02/08/2023:00:00:00" endtime="02/08/2023:24:00:00" requestId="TID:348627200000212ea7"
     | stats count as 242_COUNT by logRecordType]
| eval difference = (242_COUNT - 240_COUNT)
| table logRecordType, 240_COUNT, 242_COUNT, difference

The eval above fails after joining the two datasets with: Error in 'eval' command: The expression is malformed. I'd appreciate your help mitigating this issue.
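The likely culprit: in eval, a field name that starts with a digit is parsed as a malformed number unless it is wrapped in single quotes. A one-line fix to try:

| eval difference = '242_COUNT' - '240_COUNT'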