https://docs.splunk.com/Documentation/Splunk/9.3.2/Alert/Emailnotification#Define_an_email_notification_action_for_an_alert_or_scheduled_report Right now the domain setting is still listed as 'Optional' in the documentation, which obviously hasn't caught up with the default install health checks, so you won't find the supporting information you are requesting just yet. But I have been on the security side of corporate life for some time: giving users the default ability to email alerts or reports to any destination is a massive Data Loss Prevention issue.
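For reference, the setting that enforces this is allowedDomainList in alert_actions.conf. A minimal sketch, assuming you manage the file directly and that example.com / corp.example are placeholder domains for your organisation:

# $SPLUNK_HOME/etc/system/local/alert_actions.conf
[email]
# only allow alert/report emails to recipients in these domains (placeholders)
allowedDomainList = example.com, corp.example

A restart (or at least a reload of the email alert action) is typically needed before the restriction applies.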
Switched the inputs to 2154 .. still no luck.
Port 9997 is a reserved port for Splunk; if this is an external stream from syslog or any other source, please select a different port, for example port=2514. I selected that because 514 is reserved for syslog and I have seen 1514 used for TCP/encrypted syslog, so it's best to get up and away from both. By keeping the *514 format, it will be easier for anyone who inherits your setup to know instinctively that it's a syslog source.
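If it helps, a minimal inputs.conf sketch for that kind of listener, reusing the index and sourcetype already mentioned in this thread (adjust to taste):

[tcp://2514]
index = mmsproxy
sourcetype = bluecoat:proxysg:access:syslog
connection_host = ip
disabled = false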
For testing, I tried the props you provided along with these inputs.conf:

Test 1:
[splunktcp://9997]
index = mmsproxy
source = tcp.bluecoat
sourcetype = bluecoat:proxysg:access:syslog
disabled = false

Test 2:
[splunktcp://9997]
index = mmsproxy
source = tcp.bluecoat
sourcetype = bluecoat
disabled = false

Restarted Splunk after both of these tests. Still no luck.
I deleted my custom dashboard from the dashboard list on my AppDynamics SaaS Controller. Is there a way I can recover a deleted dashboard?
Yes, I did restart Splunk after each conf change. Here's the inputs.conf:

[splunktcp://9997]
index = mmsproxy
source = tcp.bluecoat
sourcetype = bluecoat:proxysg:access:syslog
disabled = false

Will check your props too and respond back in a few minutes.
Hi @Ben, you should change the grants in the apps containing the other dashboards. Then, how did you create the new role? Did you create it from scratch, by cloning another one, or did you use inheritance? Don't use inheritance. Ciao. Giuseppe
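If the goal is to hide the OOTB dashboards in bulk, one possible approach (a sketch, not the only way) is to restrict read access at the app level in the metadata/local.meta of the app that ships those dashboards, instead of editing each view. Assuming that app is called some_ootb_app (placeholder) and the role is it_user:

# $SPLUNK_HOME/etc/apps/some_ootb_app/metadata/local.meta
[views]
# restrict read on every view in this app; it_user is not listed, so it loses visibility
access = read : [ admin ], write : [ admin ]

A restart or a debug/refresh is needed for metadata changes to take effect, and this affects every role that is not listed, so check who else relies on those dashboards first.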
Hi @rahusri2, as I said, you can install a Splunk Heavy Forwarder and configure it exactly as the on-premise receiver. Then, to forward data to Splunk Cloud, you have to download the forwarder credentials app from your Splunk Cloud instance and install it on the Heavy Forwarder; otherwise it cannot send logs to Splunk Cloud. Ciao. Giuseppe
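On the Heavy Forwarder that usually looks something like this (a sketch; the credentials package downloaded from Splunk Cloud is assumed to be named splunkclouduf.spl, and admin:changeme is a placeholder):

$SPLUNK_HOME/bin/splunk install app /tmp/splunkclouduf.spl -auth admin:changeme
$SPLUNK_HOME/bin/splunk restart

# then verify the cloud indexers appear as active forward-servers
$SPLUNK_HOME/bin/splunk list forward-server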
The default one, "mscs:azure:eventhub", doesn't work at all. For some other inputs I used "ms:o365:management", which extracts fields for some of them. But we have several sources, like Azure AD, Exchange, and all the other MS products, and it's not clear to me which sourcetype I should use.
Hi @Utkc137, sorry for the very stupid question: did you restart your Splunk server after the conf update? Could you share the inputs.conf you are using? Please try this in local/props.conf:

[bluecoat]
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = ^
rename = bluecoat:proxysg:access:syslog

[bluecoat:proxysg:access:syslog]
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = ^
pulldown_type = true
category = Network & Security
description = Data from Blue Coat ProxySG in W3C ELFF format thru syslog
KV_MODE = none
SHOULD_LINEMERGE = false
EVENT_BREAKER_ENABLE = true
MAX_DAYS_AGO = 10951
TRUNCATE = 64000

Ciao. Giuseppe
Can you share the inputs stanza you have for listening to the TCP stream? Inside the default application props is:

[bluecoat]
rename = bluecoat:proxysg:access:syslog

This occurs at search time only, per the instructions at https://docs.splunk.com/Documentation/Splunk/9.3.0/Admin/Propsconf#Sourcetype_configuration:

rename = <string>
* Renames [<sourcetype>] as <string> at search time.
* With renaming, you can search for the [<sourcetype>] with sourcetype=<string>.
* To search for the original source type without renaming it, use the field _sourcetype.
* Data from a renamed sourcetype only uses the search-time configuration for the target sourcetype. Field extractions (REPORTS/EXTRACT) for this stanza sourcetype are ignored.
* Default: empty string

This leaves any _time extraction issues with the source type identified in the inputs.conf stanza.
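In other words, whichever sourcetype the inputs.conf stanza assigns (in this thread, bluecoat:proxysg:access:syslog in one test and bluecoat in the other), the index-time timestamp settings have to live under that exact stanza on the parsing tier. A minimal sketch for the first case, reusing the timestamp format already posted in this thread:

[bluecoat:proxysg:access:syslog]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
SHOULD_LINEMERGE = false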
Hello @gcusello, Thanks for your reply, really appreciated.

"Let me understand: you have a Forwarder (UF or HF) using the outputs.conf you shared to forward logs to Splunk Cloud that receives syslogs (using UDP on port 8125), is it correct?"

I have a StatsD server configured locally, running on port 8125 (UDP), and it generates some metric data. Currently, the application using the StatsD server is sending metrics to Splunk Enterprise (running locally). I can view all the metrics from the Splunk Analytics Workspace without any issues.

Now, I want to forward all application metrics from the StatsD server (running on port 8125 UDP) to Splunk Cloud instead of Splunk Enterprise. I have read in a couple of documents that for this use case we have to use a heavy forwarder. To achieve this, I added the Splunk Cloud address "prd-p-7mh2z.splunkcloud.com:9997" in "Forwarding and receiving → Configure forwarding" but encountered the following error:

The TCP output processor has paused the data flow. Forwarding to host_dest=prd-p-7mh2z.splunkcloud.com inside output group default-autolb-group from host_src=rahusri2s-MacBook-Pro.local has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

# cat /Applications/splunk/etc/system/local/outputs.conf
[tcpout]
defaultGroup = default-autolb-group
indexAndForward = 1

[tcpout:default-autolb-group]
server = prd-p-7mh2z.splunkcloud.com:9997

# cat /Applications/splunk/etc/apps/search/local/inputs.conf
[splunktcp://9997]
connection_host = ip

[udp://8125]
connection_host = dns
host = rahusri2s-MacBook-Pro.local
index = 4_dec_8125_udp
sourcetype = statsd

Thank You.
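One quick way to check whether the cloud endpoint is even reachable from that machine (a sketch; the hostname is the one from the error above, and Splunk Cloud's port 9997 normally expects the TLS certificates that ship inside the cloud credentials app):

# basic TCP reachability to the cloud indexing endpoint
nc -vz prd-p-7mh2z.splunkcloud.com 9997

# inspect the TLS handshake the endpoint presents
openssl s_client -connect prd-p-7mh2z.splunkcloud.com:9997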
| rest splunk_server=local /services/authorization/roles
| rename title as role
| table role capabilities imported_capabilities imported_roles

Sorry to belabor this point, but I'm not certain you have answered my question. Does the role import another role which has the setting? The above REST call, run on the Search Head the user is assigned to, will tell you the exact information. If you have already checked and no stray imports are occurring, then my apologies for keeping after this point. I've reviewed the documentation on capabilities and just can't find anything that would explain the user behavior.
Hello, I've created a simple app, let's call it IT_Users_App, linked to a certain role called it_user. In the app, a user with the role above can see hundreds of OOTB dashboards by default. I would like to hide those OOTB dashboards from the app / role in a bulk action. Doing so one by one will not be fun. Is there a way to accomplish that? Thanks in advance.
Just tested with the bluecoat sourcetype... no luck. It's a standalone Splunk instance (dev env).
Hi @Utkc137, then, where did you locate the add-on? It should be on the first HF the data passes through or (if HFs aren't present) on the Indexers. Ciao. Giuseppe
Hi @Utkc137, did you try with the sourcetype "bluecoat"? That should be the one you assigned to your input. Ciao. Giuseppe
Also, the sourcetype I used originally is mentioned in the inputs.conf... and remains the same until the logs are ingested.
Just tested using source in the props stanza name (source is defined in inputs.conf) and it's still picking up the index time as the timestamp.
Hi @Utkc137, there's a priority in conf file reading, and that add-on includes some transformations, so probably the sourcetype you added isn't present when the local file is read and is only created afterwards by a transformation. Look at the default sourcetype and try adding your configuration to that sourcetype. Ciao. Giuseppe
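A quick way to see which props stanzas actually win for a given sourcetype on that instance is btool (a sketch; the sourcetype names are the ones discussed in this thread):

$SPLUNK_HOME/bin/splunk btool props list bluecoat --debug
$SPLUNK_HOME/bin/splunk btool props list bluecoat:proxysg:access:syslog --debug

The --debug flag prints the file each setting comes from, which makes precedence problems and overriding transforms easy to spot.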