All Topics



We are running the latest update for Splunk Enterprise Security, which includes the new "Cloud Security" option. In Cloud Security, I can see some data when using the "Microsoft 365 Security" option. However, no data is shown for the following options:
- Security Groups
- IAM Activity
- Network ACLs
- Access Analyzer
Is there some configuration that I have missed? Thanks. Steve Rogers
We are running in Splunk Cloud and have configured the "Splunk Add-On for Microsoft Cloud Services" based on the provided configuration documentation. I am trying to use the Microsoft Azure App for Splunk to view Azure data (which I presumed would be pulled in by the "Splunk Add-On for Microsoft Cloud Services"), but the Microsoft Azure App for Splunk shows no data at all. I have verified the add-on configuration, but I am still not seeing any data. Does anyone have this app working and displaying results? Best regards, Steve Rogers
Hello, I need to build a search where I can subtract a token from the previous value in a row. I know how to get the first count (800), which is simply calculated through a query I already have. I do not know how to get the token to subtract from the value of the cell right above. Does anyone know how to write this in Splunk query logic so it can compute these values? Example:

_time   Count   Notes
05:00   800     Saved token = 100
05:05   700     800-100
05:10   600     700-100
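A hedged sketch of one way to do this with streamstats and filldown, assuming the base search produces one row per 5-minute bucket and the token is named saved_token (both names are assumptions):

```
<your base search producing _time and Count>
| streamstats count as rownum
| eval Count = if(rownum=1, Count, null())
| filldown Count
| eval Count = Count - (rownum - 1) * $saved_token$
```

The first row keeps its computed Count (800); filldown propagates it, and each later row subtracts the token once per elapsed row, giving 700, 600, and so on for a token of 100.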
Looking for the new location to check for Splunk patches; it used to be here - https://www.splunk.com/en_us/product-security/announcements-archive.html. We are required to check for any new patches monthly, and the location has moved.
I'm trying to create a table of availabilities (percent uptime) for a given service for a set of hosts.  My desired output is a simple 2-column table of "Host" and "Availability (%)", like the one below:

rhnsd Availabilities
Host       Availability
my-db-1    100%
my-db-2    97.5%
my-db-3    100%
my-db-4    72.2%

I have a query I currently use to get the availability of a service for a single host, but I'd like to scale it up to create the above output.  It assumes ps.sh runs every 1800 seconds, takes the number of events found over a given time period (info_max_time-info_min_time), and divides that by the total number of 1800-second intervals that fit in that period, with some conditions for when no host matches or when the availability would exceed 100.  That query is as follows:

index=os host="my-db-1.mydomain.net" sourcetype=ps rhnsd
| stats count, distinct_count(host) as hostcount
| addinfo
| eval availability=if(hostcount=0,0,if(count>=(info_max_time-info_min_time)/1800,100,count/((info_max_time-info_min_time)/1800))*100)
| table availability

Or if there's a much easier way to accomplish this that I don't know about, I'm all ears.  Any help is greatly appreciated.
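One possible way to scale this is to group the stats by host; a sketch under the same 1800-second assumption (the wildcarded host pattern is an assumption):

```
index=os host="my-db-*.mydomain.net" sourcetype=ps rhnsd
| stats count by host
| addinfo
| eval intervals=(info_max_time-info_min_time)/1800
| eval availability=round(min(count/intervals, 1)*100, 1)
| table host, availability
```

One caveat: stats only emits rows for hosts that logged at least one event, so a host at 0% availability would be missing entirely; showing it would require appending the full host list from a lookup.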
I am new to Splunk and I am trying to parse an AIDE scan log file so each line displays as its own event. Currently, Splunk just reads all the lines as a single event.  I know I may have to build a regex once I have Splunk reading the file correctly, but currently Splunk isn't breaking events on the newline character. How can I get Splunk to parse each line vs reading the entire file as a single event?
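Line breaking like this is usually controlled in props.conf; a minimal sketch, assuming a custom sourcetype name of aide_scan (an assumption) applied to the monitored file:

```
# props.conf on the first full Splunk instance the data passes through
# (indexer or heavy forwarder); the stanza name is the sourcetype.
[aide_scan]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```

SHOULD_LINEMERGE = false stops Splunk from gluing lines together, and LINE_BREAKER makes each newline start a new event.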
Hello, Looking for a way to partially join two inputlookups.

Lookup 1: username, name
jsmith, John
jdoe, Joe

Lookup 2: username, status
jsmith-sa, enabled

I would like to return a match of jsmith to jsmith-sa, but have not been able to figure out how to partially join, i.e. search for jsmith* against lookup 2 rather than for exact matches.  The second lookup may have the entire keyword or keyword-something.

Desired search output: jsmith, jsmith-sa, enabled
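One workaround is to derive a join key by stripping the suffix from lookup 2's usernames and joining on that; a sketch, assuming the files are named lookup1.csv and lookup2.csv and the suffix always follows a hyphen (both assumptions):

```
| inputlookup lookup2.csv
| rename username AS sa_username
| eval username = mvindex(split(sa_username, "-"), 0)
| join type=inner username [| inputlookup lookup1.csv]
| table username, sa_username, status
```

For the sample data this would return jsmith, jsmith-sa, enabled.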
Hello, We are configuring the Splunk Add-on for Microsoft Cloud Services.  Is there a corresponding Splunk app for visualizing the data ingested by this add-on?  Steve Rogers
I am new to Splunk and need some serious practice to learn all the cool things Splunk can do. I am trying to load the BOTSV1 JSON dataset into my lab environment so I can start learning the basics of SPL. According to the comments on GitHub, this dataset is 120 GB uncompressed, which brings up the following two issues:
1) The Splunk web file importer will only load files up to 500 MB. How am I supposed to load a 120 GB file?
2) The Splunk development license that I received is limited to 10 GB, so how am I supposed to load this 120 GB file once question #1 is resolved?
I am sure I am not the only one encountering this issue, so forgive me for asking a question that has probably already been answered numerous times.
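On question 1: the 500 MB cap applies only to the web uploader, not the CLI, so a oneshot load is one option (the path, sourcetype, and index name below are assumptions). Note also that BOTSv1 is distributed as a pre-indexed Splunk app as well, which sidesteps both the upload limit and the daily indexing-volume question, since already-indexed data does not count against the license:

```
# Run on the Splunk server; no file-size cap on the CLI.
splunk add oneshot /data/botsv1.json -sourcetype _json -index botsv1
```

If you do index the raw 120 GB file on a 10 GB license, expect license violations, so the pre-indexed app is likely the more practical route for a lab.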
I have created a lookup table with filename and cutofftime, the time by which we have to receive each file. I have to compare cutofftime against the current time, find the entries falling within the next 30 minutes, retrieve those particular file names from the lookup, and search for them. Please help me with the query.
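A possible sketch, assuming the lookup file is named file_cutoffs.csv with fields filename and cutofftime, and that cutofftime is a time-of-day string like "14:30" (all assumptions):

```
| inputlookup file_cutoffs.csv
| eval cutoff_epoch = strptime(strftime(now(), "%Y-%m-%d") . " " . cutofftime, "%Y-%m-%d %H:%M")
| eval mins_until = (cutoff_epoch - now()) / 60
| where mins_until >= 0 AND mins_until <= 30
| table filename
```

The result could then feed the main event search as a subsearch so that only the matching file names are searched for.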
I have changed the certificate in server.conf so that port 8089 uses my own cert. This is the same cert that I have been using for port 8000. I changed server.conf according to the documentation, rebooted, and verified the certificate; port 8089 is pointing to the new cert. However, when trying to sign in on port 8000, I get the following error:

Login failed due to incorrectly configured Multifactor authentication. Contact Splunk support for resolving this issue

Please advise; we are using Duo.  This is what the sslConfig portion of my server.conf looks like:

[sslConfig]
sslPassword = ******************************************************************************
privKeyPath = C:\Program Files\Splunk\etc\auth\mycerts\*****************.key
serverCert = C:\Program Files\Splunk\etc\auth\mycerts\*****************.pem
sslRootCAPath = C:\Program Files\Splunk\etc\auth\mycerts\************.pem
requireClientCert = False
Hi, I have a field named VULN in index=ABC sourcetype=XYZ. We need to know if new VULNs show up in the last 48 hours of data compared to 1 month ago. Basically, we need to see how many new VULNs are in the data compared to last month and how many unique IPs are affected.  Thanks in advance!!!
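A sketch of one approach: search back a month, record each VULN's first appearance, and keep only those first seen inside the last 48 hours (the src_ip field name is an assumption for "affected IPs"):

```
index=ABC sourcetype=XYZ earliest=-30d@d
| stats min(_time) as first_seen, dc(src_ip) as unique_ips by VULN
| where first_seen >= relative_time(now(), "-48h")
| table VULN, unique_ips
```

Anything whose earliest occurrence in the 30-day window falls inside the last 48 hours is "new" relative to last month, with unique_ips counting the distinct affected IPs.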
Gentlemen, We are using https://splunkbase.splunk.com/app/1914/. Splunk is not extracting all the fields visible in the Windows Sysmon events; it leaves out a lot of fields.

This is what I suspect is the cause, but I need someone to advise whether I am on the right track. In the events, the sourcetype shows as:

WinEventLog:Microsoft-Windows-Sysmon/Operational

However, on my search head, when I go to Settings >> Sourcetypes >> All, I see a different name:

XmlWinEventLog:Microsoft-Windows-Sysmon/Operational

There is no sourcetype there by the name of WinEventLog:Microsoft-Windows-Sysmon/Operational. Is the conflict between the different sourcetype names causing the issue? What needs to be done to fix it?

Things I tried:
1. Added the following to inputs.conf to make it format as XML:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
renderXML=true
sourcetype=XMLWinEventLog:Microsoft-Windows-Sysmon/Operational

This did extract all the fields, but ended up showing the events in XML format.  How can I keep the default/original format of displaying Windows events yet make it extract all the fields? Thanks all
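For reference, the Sysmon add-on's extractions key off the XML-rendered sourcetype, so the mismatch you noticed is indeed the likely cause; a sketch of the inputs.conf stanza commonly used with it (verify the exact settings against the add-on's own documentation):

```
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = false
renderXml = true
```

With renderXml enabled the raw events are stored and displayed as XML; as far as I know there is no supported way to keep the classic rendered event view while also getting the full XML field set from the same input, since the extra fields only exist in the XML payload.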
Hello, I'm attempting to set the panel background color to transparent within a couple of Choropleth panels in Dashboard Studio. However, nothing seems to work. I have attempted to set the background using the following:

"transparent": true
"backgroundColor": "rgba(0, 0, 0, 0)"
"backgroundColor": "transparent"

I can successfully change the color to white or other colors, but not to transparent.  Thanks in advance!
Hello all, I am facing an issue collecting data from two of the hosts. We are using rsyslog to ingest the data. Logs are getting updated in the logdump on the HF, but I'm not able to see the logs in Splunk. We can see logs from other hosts, but are having issues with two particular hosts with high log volume. I don't see any errors/warnings related to queueing. When checking the status of the rsyslog service, we can see the below errors:

invalid or yet-unknown config file command 'TCPServerAddress' - have you forgotten to load a module? [v8.24.0-57.el7_9 try http://www.rsyslog.com/e/3003 ]
Could not create tcp listener, ignoring port 515 bind-address (null). [v8.24.0-57.el7_9 try http://www.rsyslog.com/e/2077 ]
module 'imtcp.so' already in this config, cannot be added [v8.24.0-57.el7_9 try http://www.rsyslog.com/e/2221 ]

Any suggestions/feedback is welcome. Thanks
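Those messages suggest the legacy TCPServerAddress directive appears before imtcp is loaded, and that imtcp is loaded more than once across /etc/rsyslog.conf and /etc/rsyslog.d/*. A sketch of the modern equivalent, with the port taken from the error text:

```
# Load imtcp exactly once, near the top of the config
# (remove any duplicate $ModLoad imtcp / module(load="imtcp") lines).
module(load="imtcp")

# Declare the listener with input() instead of the legacy
# $InputTCPServer* directives.
input(type="imtcp" port="515")
```

Once the duplicate load and the orphaned directive are cleaned up, the port 515 listener should bind, which would explain why only these two hosts' logs were missing.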
Hello all, What is the best way to collect and monitor system health and performance metrics from various security devices and endpoint devices? Log collection will be SNMP- or API-based. Please recommend any add-on or app, if available, for infrastructure health monitoring.  Is Splunk ITSI recommended for this requirement?  TIA
We recently updated our Splunk app to use the latest SDK, and since then we've been running into an issue where our custom configuration page (where users enter an API key for our service) fails to load with the error below.  The error seems to indicate that the problem is the field having no value at initial install (which has always been the default state).  Any suggestions on how we can address this?

{"messages":[{"type":"ERROR","text":"Unexpected error \"<class 'splunktaucclib.rest_handler.error.RestError'>\" from python handler: \"REST Error [500]: Internal Server Error -- Traceback (most recent call last):\n File \"/opt/splunk/etc/apps/SA-GreyNoise/bin/SA_GreyNoise/splunktaucclib/rest_handler/handler.py\", line 124, in wrapper\n for name, data, acl in meth(self, *args, **kwargs):\n File \"/opt/splunk/etc/apps/SA-GreyNoise/bin/SA_GreyNoise/splunktaucclib/rest_handler/handler.py\", line 303, in _format_response\n masked = self.rest_credentials.decrypt_for_get(name, data)\n File \"/opt/splunk/etc/apps/SA-GreyNoise/bin/SA_GreyNoise/splunktaucclib/rest_handler/credentials.py\", line 203, in decrypt_for_get\n data[field_name] = clear_password[field_name]\nTypeError: 'NoneType' object is not subscriptable\n\". See splunkd.log/python.log for more details."}]}
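The traceback bottoms out in splunktaucclib subscripting a None credential, so upgrading the bundled splunktaucclib may fix this outright. The failing pattern and a defensive guard can be sketched in isolation (the function signature and field names here are hypothetical stand-ins for the library code, not its actual API):

```python
def decrypt_for_get(data, clear_password, field_name="api_key"):
    """Copy a decrypted credential into the response payload.

    On a fresh install no credential has been stored yet, so
    clear_password is None; subscripting it raises the TypeError
    from the traceback. Guarding with a None check returns the
    payload unchanged instead of crashing the config page.
    """
    if clear_password is not None and field_name in clear_password:
        data[field_name] = clear_password[field_name]
    return data

# Fresh install: no stored credential yet, page should still load.
print(decrypt_for_get({"name": "greynoise"}, None))
# After the user has saved an API key:
print(decrypt_for_get({"name": "greynoise"}, {"api_key": "******"}))
```

The same None guard, applied inside (or patched over) credentials.py, is the shape of fix that newer splunktaucclib releases carry.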
Hi, I need to create a chart showing the top categories over time. At the moment, the timechart I am getting places time on the y-axis and the categories on the x-axis. In addition, there are over 50 possible categories, and the user should see only the top 20 categories with the respective count for each time period. So I would imagine 20 lines on the graph, no? Can you please help? Many thanks, Patrick
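If the current search uses chart with the category as the first split, swapping to timechart puts _time back on the x-axis, and timechart's limit/useother options cap the series at the top N. A sketch (index, span, and field names are assumptions):

```
index=my_index
| timechart span=1h limit=20 useother=false count by category
```

limit=20 keeps the 20 highest-count categories as separate series, and useother=false suppresses the aggregate OTHER series so exactly those 20 lines appear.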
Hello, We have several cases where we relate the data between panels. In the example screenshots below, we have:
1) A chart with the number of database threads over time, with the sum of threads per time unit involved in the execution of each particular SQL statement (SQL hash) represented by the different colors.
2) A pie chart showing the portion of each particular SQL statement/hash in the given time span.
Is there any easy way to keep the colors for the same SQL statements/hashes consistent between the two panels? Kind Regards, Kamil
I am trying to create a report that will show month-over-month reporting for web service average response time as a percentage against a threshold.

sourcetype="web_logs" `web_resp_index` *
| bucket _time span=month
| stats count as total_count count(eval(resp_time>=500)) as fail_count count(eval(resp_time<500)) as success_count count(eval(resp_time=="")) as null_count by source _time
| eval success_percent=round((success_count/total_count)*100,2)
| eval _time=strftime(_time, "%b")
| fields - total_count fail_count success_count null_count

I now have:

source  _time  success_percent
www1    Jan    94.6
www1    Feb    93.2
www1    Mar    94.3
www2    Jan    98.5
www2    Feb    92.4
www2    Mar    84

I am looking to transpose and group so that I have one row per source and monthly columns:

Source  Jan   Feb   Mar
www1    94.6  93.2  94.3
www2    98.5  92.4  84
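One option is xyseries after computing the month label; a sketch based on the search above, trimmed to the fields the pivot needs (since stats emits rows in _time order, the month columns come out chronologically):

```
sourcetype="web_logs" `web_resp_index`
| bucket _time span=month
| stats count as total_count, count(eval(resp_time<500)) as success_count by source, _time
| eval success_percent=round((success_count/total_count)*100,2)
| eval month=strftime(_time, "%b")
| xyseries source month success_percent
```

xyseries takes row field, column field, and value field in that order, producing exactly the one-row-per-source layout with Jan/Feb/Mar columns.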