All Posts

Thanks for the reply. What you said makes sense. I have a concern, though. I looked at one of our typical UF installs and verified that there already is a ../etc/system/local/server.conf. Since I'm the admin and normally do all UF deployments, I know this file was automatically generated when the forwarder was installed. As you suspected, it contains the hostname of the server. Interestingly, ../etc/system/default/server.conf contains serverName = $HOSTNAME, so the serverName field is populated when the UF is installed and a local/server.conf is created. The issue is that this value would have to be overridden after every UF install. That's possible, but it seems like it shouldn't be necessary. Thoughts?
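For anyone following along, a quick way to see which copy of server.conf actually wins, assuming a default UF install, is Splunk's built-in btool utility:

$SPLUNK_HOME/bin/splunk btool server list general --debug

The --debug flag prints the file each effective setting comes from, so you can confirm whether local/server.conf is overriding the default.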
Thank you for your replies. I am looking to use this to monitor a Citrix environment with the Citrix Uber Agent on both cloud and on-prem machines reporting to a Splunk console, which is why I figured Splunk Cloud would be ideal. This is a relatively new product on the Citrix side, so the documentation is not fully formed. The agent is configured via a .CONF file where the server URL and token are set, but the particulars of exactly what those values should be get glossed over in everything I've seen, and the example in the file is only for an on-prem Splunk instance. This likely won't help, but at least you can see where I'm coming from. Wade
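For readers hitting the same wall: the receiver section of that .CONF file is typically shaped like the sketch below. The stanza and key names are recalled from uberAgent's documented Splunk receiver format and should be treated as assumptions to verify against the vendor docs; the stack name and token are placeholders.

[Receiver]
# Assumed keys - verify against the uberAgent documentation
Name = Splunk Cloud via HEC
Type = Splunk
Protocol = HTTP
Servers = https://http-inputs-<stackName>.splunkcloud.com:443
RESTToken = <yourHECToken>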
Is there a particular reason you are looking to send out over HTTP Event Collector rather than the usual Splunk2Splunk approach using the settings provided in the Universal Forwarder app in your Splunk Cloud instance? If you really do want to send over HTTPS instead, then you will need to update the outputs.conf of your forwarder. To configure your on-premises Splunk Universal Forwarder to send data via HTTP to your new cloud instance: first, create a HEC token in your cloud environment (for more info, see the docs page). Then, modify the outputs.conf file located in $SPLUNK_HOME/etc/system/local/ (or the equivalent in your setup) and define your cloud instance's endpoint there. For example:

[httpout]
uri = https://http-inputs-<stackName>.splunkcloud.com:443
httpEventCollectorToken = <yourHECToken>

There is more info on HTTP output in the Splunk docs. I hope this helps. Will
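As a quick sanity check before wiring up the forwarder, you can exercise the token and endpoint directly with the standard HEC test call (stack name and token are placeholders):

curl -k "https://http-inputs-<stackName>.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <yourHECToken>" \
  -d '{"event": "HEC connectivity test"}'

A healthy endpoint should respond with something like {"text":"Success","code":0}.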
The value used for the host in metrics.log (which I believe is the log you are referring to, and which powers some of the Monitoring Console dashboards) comes from the serverName field under the [general] stanza of server.conf. If you update your /opt/splunk/etc/system/local/server.conf file so that the serverName value under [general] is the correct name for your host, then this should flow through to the Monitoring Console. Let me know how you get on! Regards, Will
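Concretely, the edit looks like this (the hostname is a placeholder), followed by a restart so the change takes effect:

[general]
serverName = my-correct-hostname

/opt/splunk/bin/splunk restart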
Unfortunately, at this point you would need a reset license to remove the lock, as it is reporting an enforced limit. You may be able to get this, along with an extended trial, by contacting Splunk sales; otherwise, unfortunately, I think it's likely going to be a re-install to start again with a trial license. Regarding the nullQueue: this is where you could send subsets of data if you wanted to keep only some of the data ingested. Since you're only using a single source of data, it sounds like you would find it easier to toggle off the input/source of the data feed. Data that is sent to nullQueue will not be saved by Splunk. I hope this helps, even if it's not necessarily what you were hoping for! Kind regards, Will
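For completeness, the usual nullQueue routing pattern is sketched below; it belongs on the indexing tier (or a heavy forwarder), and the sourcetype name and regex are placeholders:

# props.conf
[my:sourcetype]
TRANSFORMS-discard = setnull

# transforms.conf
[setnull]
REGEX = pattern-of-events-to-discard
DEST_KEY = queue
FORMAT = nullQueue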
There are no "ERROR" messages associated with the message trace input, but there are numerous "INFO" messages that seem to indicate data is being successfully brought in: I just dont see anythin... See more...
There are no "ERROR" messages associated with the message trace input, but there are numerous "INFO" messages that seem to indicate data is being successfully brought in: I just dont see anything that looks like a message trace entry when searching the index that I've configured for these logs.  Unless it's these "Exchange" records that show operations like "Send"," MailItemsAccessed" etc, but I feel like those are coming from a different input (e.g., the "Mailbox Usage Detail" input):    
All the settings you need are in the "Universal Forwarder" app on your cloud instance.  Open that app, click the green Download button, then install the downloaded file in the Universal Forwarder on your Windows server.
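Once the file is on the server, it is typically installed from the forwarder's CLI and followed by a restart; the paths and file name below are the usual defaults, so treat them as assumptions for your environment:

cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk install app C:\temp\splunkclouduf.spl
splunk restart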
TL;DR: does | search operate differently after tstats, especially with wildcards, NOT, OR, AND, parentheses, etc.?

I'm dev/testing some queries with tstats and want to see if data modeling would make our current alerts more efficient. To test, I view the SPL of an alert we use and implement the fields of that alert into the root search of the Endpoint data model that comes with CIM. However, we have a lot of exclusions/filters in this alert (e.g., ignoring certain Account_Names, New_Process_Names, Creator_Process_Names, etc.).

In the separate tstats query, I mimic most of everything else from the original alert, especially the formatting of the tstats command so that it mirrors the stats command from the original alert. For example:

| stats count, values(field1) as field1 by field2, field3

becomes:

| tstats count, values(Processes.field1) as field1 FROM datamodel=Endpoint.Processes by Processes.field2, Processes.field3

Before I decide to accelerate the data model, I want to make sure the output of both the alert and the tstats query are the same. To control this, I set an arbitrary timeframe (earliest=-4h@h, latest=-2h@h) and apply it to both queries. In the tstats query, I do a pipe search ( | search ... ) after the main tstats command and paste the exclusions/filters from the alert into that clause. It has a bunch of wildcards in it, for reasons I won't get into, and yes, some of it is not great practice with leading wildcards, but the original alert works, so it's fine for now.

When I compare the output of both queries, the statistics are slightly off. Even though I apply the same timeframe, I notice that if I tailor the wildcards a bit, it somewhat closes the gap, but not consistently, especially as I increase the timeframe. This leads me to believe that | search treats certain characters differently after tstats, and I don't know why.
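One variable worth ruling out when the numbers drift (field values below are placeholders that mirror the question): move the exclusions and the time bounds into the tstats WHERE clause, where they filter on the data model's prefixed field names before aggregation, instead of filtering the renamed output table with | search afterward:

| tstats count, values(Processes.field1) as field1 FROM datamodel=Endpoint.Processes WHERE earliest=-4h@h latest=-2h@h NOT (Processes.process_name="vendor*" OR Processes.user="svc_*") BY Processes.field2, Processes.field3

If the WHERE-clause version and the | search version disagree, that localizes the discrepancy to how the post-pipe filter matches the aggregated (and possibly multivalue) fields.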
Hello @gomitamu, the CyberArk TA supports only CyberArk v12; official support for v14 is not available at this time. However, you can use the same TA to get the data in and tweak the props if needed. I have seen some people using this TA with v14, and it is working fine for them.
Hello @shaunm001, you should first check the internal logs in Splunk using a query such as:

index="_internal" *O365* *ERROR*

Based on the ERROR logs, we can troubleshoot this further.
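If that comes back empty, a slightly narrower variant can confirm whether the input is logging at all; the source pattern below assumes the add-on's usual log file naming, so adjust as needed:

index=_internal source=*splunk_ta_o365* (log_level=ERROR OR log_level=WARNING)
| stats count by source, log_level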
This worked well. Last question: if I wanted to ensure the single record that I find only comes from search 1 and not from search 2, how would I do that? Thanks again, Todd
index=cim_modactions source=/opt/splunk/var/log/splunk/incident_ticket_creation_modalert.log host=sh* search_name=* source=* sourcetype=modular_alerts:incident_ticket_creation user=* action_mode=* action_status=* search_name=kafka*
    [| rest /servicesNS/-/-/saved/searches
    | search title=kafka*
    | rename dispatch.earliest_time AS "frequency", title AS "title", eai:acl.app AS "app", next_scheduled_time AS "nextRunTime", search AS "query", updated AS "lastUpdated", action.email.to AS "emailTo", action.email.cc AS "emailCC", action.email.subject AS "emailSubject", alert.severity AS "SEV"
    | eval severity=case(SEV == "5", "Critical-5", SEV == "4", "High-4", SEV == "3", "Warning-3", SEV == "2", "Low-2", SEV == "1", "Info-1")
    | eval identifierDate=now()
    | convert ctime(identifierDate) AS identifierDate
    | table identifierDate title lastUpdated, nextRunTime, emailTo, query, severity, emailTo, actions
    | fillnull value=""
    | sort -lastUpdated actions]
| table user search_name action_status date_month date_year _time
Hi @Karthikeya, it could be an access permission issue on the extracted field. Go to Settings > Fields, click on Field Extractions, and check whether the permissions for your field are correct. To ensure access for all users, set the app permissions to Global and the role permissions to Read for Everyone.
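If you would rather manage this in configuration than in the UI, the same permissions live in the app's metadata/local.meta file; the sourcetype and extraction names below are placeholders:

# $SPLUNK_HOME/etc/apps/<app>/metadata/local.meta
[props/my:sourcetype/EXTRACT-myfield]
access = read : [ * ], write : [ admin ]
export = system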
I'm struggling to get data in from Infoblox using the Splunk Add-on for Infoblox. I looked at the documentation and realized it doesn't support current versions: I'm using Infoblox NIOS 9.0.3, while the Splunk documentation says the add-on supports Infoblox NIOS 8.4.4, 8.5.2, and 8.6.2. Specifically, it's not parsing correctly, and everything goes into sourcetype=infoblox:port. Are there any more current ways to get data in from Infoblox? Can I get Splunk support to help me, since it's a Splunk-supported add-on?
How do I determine the server setting for my on-premise agent config when trying to send data via HTTP from a Windows server to my new cloud instance?
I have not found any new information.  I opened a support ticket to see if they could help.
This advice continues to be helpful, thank you!