All Posts



I've tried a few methods shared here to adjust the start/end times of a span. Mainly:

1 - | eval _time=_time-3600 | bin _time span=4h | eval _time=_time+3600
2 - | timechart span=4h aligntime=@h-120m

However, after testing, neither of these actually offsets the span. It only changes the times shown in the resulting table. The values (in my case, counts) in each box do not change, just the _time values. Am I doing something wrong? For example:

_time        A    B    C
1/28 00:00   2    1    2
1/28 04:00   4    2    4
1/28 08:00   6    3    6
1/28 12:00   8    4    8
1/28 16:00  10    5   10

becomes

_time        A    B    C
1/27 22:00   2    1    2
1/28 02:00   4    2    4
1/28 06:00   6    3    6
1/28 10:00   8    4    8
1/28 14:00  10    5   10
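The shift-bin-shift arithmetic from option 1 can be simulated outside Splunk to check that it does move bucket boundaries. A minimal Python sketch, with illustrative values (4h span, 1h offset, times as epoch seconds):

```python
SPAN = 4 * 3600    # 4h bin span, in seconds
OFFSET = 3600      # desired 1h offset of the bucket boundaries

def bin_time(t, span):
    """Floor t to the start of its span-sized bucket (what `bin _time span=4h` does)."""
    return (t // span) * span

def bin_with_offset(t, span, offset):
    """Shift, bin, shift back: mirrors `eval _time=_time-3600 | bin ... | eval _time=_time+3600`."""
    return bin_time(t - offset, span) + offset

# An event at 02:30 (epoch 9000) falls in the 00:00 bucket normally,
# but in the 01:00 bucket once the boundaries are offset by 1h.
t = 2 * 3600 + 30 * 60
print(bin_time(t, SPAN))                 # 0    -> bucket starting 00:00
print(bin_with_offset(t, SPAN, OFFSET))  # 3600 -> bucket starting 01:00
```

If the offset were being applied to the binning, events near a boundary (like this one) would land in different buckets and the counts per row would change, not just the labels.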
Greetings,  Are there any official AWS CFT Templates to create necessary roles, SNS/SQS Services to use Splunk Add on for AWS to ingest Cloudtrail Data into Splunk? 
On the IDX's server.conf you need to add this line in the [sslConfig] stanza: serverCert = /opt/splunk/etc/auth/mycerts/myCombinedServerCertificate.pem Then delete the sslPassword line from your server.conf; if it's the default, Splunk will recreate it anyhow. That should fix it, unless your cert is not prepared properly with just the leaf cert + private key in myCombinedServerCertificate.pem.
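Put together, the stanza would look like this (path taken from the post above; the PEM should contain the leaf certificate followed by its private key, as described):

```ini
[sslConfig]
serverCert = /opt/splunk/etc/auth/mycerts/myCombinedServerCertificate.pem
# no sslPassword line here: delete the default one and Splunk recreates it
```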
Hi. Can I use react-router nested routes in the Splunk UI toolkit? Overlapping routing with react-router results in an error page upon reload.
Probably a proxy server (or firewall). Typically the proxy server, because the ODBC driver is not proxy-aware like your internet browser is. This is why others mentioned configuring the proxy without explanation. The typical solution is to add an environment variable that the ODBC driver can see. I would recommend testing it using the following process:
At the command line (in Windows), enter setx http_proxy <proxy_ip>:<proxy_port> (example: setx http_proxy 132.50.12.1:443)
Restart your ODBC program (e.g., Power BI)
Retest your connection
If it fails, add https_proxy the same way. If it succeeds, add it to the system permanently.
Ref:
# Configure the proxy server - Splunk Documentation
# libcurl - programming tutorial
# Set up proxy using http_proxy & https_proxy environment variable in Linux? | GoLinuxCloud
# windows - Command line to remove an environment variable from the OS-level configuration - Stack Overflow
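You can sanity-check that software actually sees those variables without involving the ODBC driver at all; Python's standard library reads the same http_proxy/https_proxy convention. A quick check, using the placeholder proxy address from the example above:

```python
import os
import urllib.request

# Simulate the environment a client would see after `setx http_proxy 132.50.12.1:443`
# (the address is the placeholder from the post, not a real proxy).
os.environ["http_proxy"] = "http://132.50.12.1:443"

# getproxies() returns the proxies picked up from *_proxy environment variables.
proxies = urllib.request.getproxies()
print(proxies.get("http"))  # -> http://132.50.12.1:443
```

This only confirms the variable is visible to processes; the ODBC driver itself still needs a restart to pick it up, as described above.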
It is not clear what you are trying to do with your sub-search. Please clarify, in non-SPL terms, what it is that you are trying to achieve.
So the only option would be an external mechanism to update the inputs.conf and reload the UF? For example, a scheduled task every hour that compares the inputs.conf with the IIS configuration, and if they differ, updates inputs.conf and reloads the UF? Kind regards, Andre
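The comparison step of such a scheduled task could be sketched like this. Everything here is hypothetical (paths, sourcetype, and the set of IIS log directories; a real version would read the IIS configuration via appcmd or PowerShell rather than hard-coding it):

```python
import configparser

def monitored_paths(inputs_conf_text):
    """Extract the paths from [monitor://...] stanzas of an inputs.conf."""
    conf = configparser.ConfigParser(strict=False)
    conf.optionxform = str  # keep key case as-is
    conf.read_string(inputs_conf_text)
    return {s[len("monitor://"):] for s in conf.sections() if s.startswith("monitor://")}

# Hypothetical current state of the UF's inputs.conf ...
inputs_conf = """
[monitor://C:\\inetpub\\logs\\LogFiles\\W3SVC1]
sourcetype = ms:iis:auto
"""

# ... versus the log directories the IIS configuration actually defines.
iis_log_dirs = {"C:\\inetpub\\logs\\LogFiles\\W3SVC1",
                "C:\\inetpub\\logs\\LogFiles\\W3SVC2"}

missing = iis_log_dirs - monitored_paths(inputs_conf)
print(sorted(missing))  # sites whose logs are not yet monitored
```

If `missing` is non-empty, the task would append the corresponding monitor stanzas and trigger a UF reload.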
Possibly a silly question, but I've wondered this for a while and now it'd actually be exactly what I need. I've got a simple HTTP traffic monitor dash, with a graph of status message counts. Underneath, I want a panel which summarises it, but ideally in 1m bins per row, e.g.

9:00AM - [OK] = 500
         [Too many open files] = 30
         [Connection timed out] = 2
         [Connection refused] = 1
9:01AM - [OK] = 459
         [Too many open files] = 21
         [Connection timed out] = 3
         [Connection refused] = 2
9:02AM - etc.

Now obviously this is a trivial stats query with a little finessing, which I've added to my dashboard as a statistics panel underneath the graph of the counts over an hour. This achieves a common monitoring goal: I've got the hour-span graph and then an at-a-glance reference with 5x 1min snapshots of what the web server is experiencing currently. However, the output of a stats panel doesn't exactly look the greatest, as above. What I'm actually wondering is: can I make it appear like the attached image below of the bubble popup (not sure what you guys call this) that appears when you click on any field within a Splunk search? That output is perfect: at-a-glance detail, count, %, a visual bar, etc. It's exactly what I'm poorly trying to replicate with my stats panel, and tbh what I've poorly replicated in many other situations. Can I replicate that in a dash somehow? Have wanted to ask this for a while...
Yes you can use tokens from a dropdown as you suggested to limit the indexes searched.
I was able to get the details of my cloud instance by creating a new NetScaler data source that exposed the correct URL as one of its fields. I think this should work for me. Appreciate everyone's help! Wade
Thanks for the reply. What you said makes sense. I have a concern though. I looked at one of our typical UF installs and I verified that there already is a ../etc/system/local/server.conf. Since I'm the admin and normally do all UF deployments, I know that this file was automatically generated when the forwarder was installed. As you suspected, it contains the hostname of the server.  Interestingly, the ../etc/system/default/server.conf contains serverName = $HOSTNAME. So the serverName field is populated when the UF is installed and a local/server.conf is created. The issue I have is that this would have to be overridden after a UF is installed. This is possible, but seems like it shouldn't be necessary. Thoughts?
Thank you for your replies. I am looking to use this to monitor a Citrix environment with the Citrix Uber Agent on both cloud and on-prem machines reporting to a Splunk console, and thus I figured Splunk Cloud would be ideal. This is a relatively new product on the Citrix side, so the documentation is not fully formed. The agent is configured via a .CONF file where the server URL and token are set, but the particulars on exactly what that will be get glossed over in everything I've seen, and the example in the file is only for an on-prem Splunk instance. This likely won't help, but at least you can see where I'm coming from. Wade
Is there a particular reason you are looking to send out over HTTP Event Collector rather than the usual Splunk2Splunk approach, using the settings provided in the Universal Forwarder app in your Splunk Cloud instance? If you really do want to send over HTTPS instead, then you will need to update the outputs.conf of your forwarder. To configure your on-premise Splunk Universal Forwarder to send data via HTTP to your new cloud instance:
First, create a HEC token in your cloud environment. For more info, see the docs page.
Then, modify the outputs.conf file located in $SPLUNK_HOME/etc/system/local/ (or the equivalent in your setup). You should define your cloud instance's endpoint here. For example:

[httpout]
uri = https://http-inputs-<stackName>.splunkcloud.com:443
httpEventCollectorToken = <yourHECToken>

More info on HTTP Output is in the Splunk docs. I hope this helps. Will
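Once a HEC token exists, it can be sanity-checked independently of the forwarder by posting a test event to the collector endpoint. A sketch using only the Python standard library; the stack name and token are the same placeholders as in the stanza above, so the actual send is left commented out:

```python
import json
import urllib.request

# Placeholders: substitute your real stack name and HEC token.
HEC_URL = "https://http-inputs-<stackName>.splunkcloud.com:443/services/collector/event"
HEC_TOKEN = "<yourHECToken>"

payload = json.dumps({"event": "hello from the forwarder host", "sourcetype": "test"})
req = urllib.request.Request(
    HEC_URL,
    data=payload.encode("utf-8"),
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},  # HEC uses token auth in this header
    method="POST",
)
# urllib.request.urlopen(req) would perform the POST; run it once real values are in place.
print(req.get_header("Authorization"))
```

A 200 response with {"text":"Success","code":0} in the body would confirm the token and endpoint before touching the forwarder's outputs.conf.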
The value used for the host in metrics.log (which I believe is the log you are referring to, and which powers some of the Monitoring Console dashboards) comes from the "serverName" field under the [general] stanza of server.conf. If you update your /opt/splunk/etc/system/local/server.conf file so that the serverName value under [general] is the correct name for your host, then this should flow through to the Monitoring Console. Let me know how you get on! Regards, Will
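The edit itself is a one-line change in the [general] stanza; a minimal sketch of it with Python's configparser, using an in-memory copy of the file (the hostname and existing value are illustrative; in practice you would read and write /opt/splunk/etc/system/local/server.conf):

```python
import configparser

conf = configparser.ConfigParser()
conf.optionxform = str  # preserve key case so the file keeps "serverName", not "servername"
conf.read_string("""
[general]
serverName = wrong-name
""")

# Set serverName to the host's correct name (illustrative value).
conf["general"]["serverName"] = "my-correct-hostname"
print(conf["general"]["serverName"])  # -> my-correct-hostname
```

After writing the file back, a restart of the instance would be needed for the new name to show up in metrics.log.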
Unfortunately, at this point you would need a reset license to remove the lock, as it is reporting an enforced limit. You may be able to get this, along with an extended trial, by contacting Splunk sales; otherwise, unfortunately, I think it's likely going to be a re-install to start again with a trial license. Regarding the nullQueue: this is where you could send subsets of data if you wanted to keep only some of the data ingested. It sounds like, as you're only using a single source of data, you would find it easier to toggle off the input/source of the data feed. Data that is sent to nullQueue will not be saved by Splunk. I hope this helps, even if it's not necessarily what you were hoping for! Kind regards, Will
There are no "ERROR" messages associated with the message trace input, but there are numerous "INFO" messages that seem to indicate data is being successfully brought in. I just don't see anything that looks like a message trace entry when searching the index that I've configured for these logs. Unless it's these "Exchange" records that show operations like "Send", "MailItemsAccessed", etc., but I feel like those are coming from a different input (e.g., the "Mailbox Usage Detail" input).
All the settings you need are in the "Universal Forwarder" app on your cloud instance.  Open that app, click the green Download button, then install the downloaded file in the Universal Forwarder on your Windows server.
TL;DR: does | search() operate differently after tstats, especially with wildcards, NOT, OR, AND, parentheses, etc.?

I'm dev/testing some queries with tstats and want to see if data modeling would make our current alerts more efficient. To test, I view the SPL of an alert we use, and implement the fields of that alert into the root search of the Endpoint data model that comes with CIM. However, we have a lot of exclusions/filters in this alert (e.g. ignoring certain Account_Names, New_Process_Names, Creator_Process_Names, etc.). In the separate tstats query, I mimic most of everything else from the original alert, especially the formatting of the tstats so that it mirrors the stats command from the original alert. Example:

| stats count, values(field1) as field1 by field2, field3

becomes

| tstats count, values(Processes.field1) as field1 FROM datamodel=Endpoint.Processes by Processes.field2, Processes.field3

Before I decide to accelerate the data model, I want to make sure the output of both the alert and the tstats query are the same. To control this, I set an arbitrary timeframe (earliest=-4h@h, latest=-2h@h) and apply it to both queries. In the tstats query, I do a pipe search ( | search () ) after the main tstats command, and paste the exclusions/filters from the alert into that clause. It has a bunch of wildcards in it, for reasons I won't get into, and yes, some of it is not great practice with leading wildcards, but the original alert works so it's fine for now. When I compare the output of both queries, the statistics are slightly off. Even though I apply the same timeframe, I notice that if I tailor the wildcards a bit it somewhat closes the gap, but not consistently, especially as I increase the timeframe. This leads me to believe that | search() treats certain characters differently with tstats, and I don't know why.