All Posts

Please share the props and transforms for that sourcetype as well as a couple of sanitized sample events. 
@richgalloway  Can you share a picture of the sourcetype along with the Splunk web screenshot? I am still getting errors on my end. Thanks
Hi @secphilomath1 , what technology are you using for these data? If they are standard, you can use the related add-on, which gives you all the parsing rules. If it's custom, you have to manually parse it. Ciao. Giuseppe
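For the custom case, a minimal sketch of what the manual parsing might look like in props.conf, assuming a hypothetical sourcetype my:custom:log with one event per line and a leading timestamp (every name and format here is a placeholder to adapt to the real data):

# props.conf (names and formats are placeholders)
[my:custom:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
KV_MODE = auto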
Hi @jbthomas1975, What are you collecting from your Splunk host? Internal indexes--_audit, _internal, _introspection, _metrics, etc.--don't count against ingest-based licensing but do factor into capacity planning.
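To see where the daily volume is actually going, internal indexes included, a quick sketch against metrics.log on the indexer (approximate, since metrics.log only records the busiest series per 30-second interval):

index=_internal source=*metrics.log group=per_index_thruput
| stats sum(kb) as kb by series
| eval GB = round(kb/1024/1024, 3)
| sort - GB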
The solution above used a single external input (10514/tcp) and transforms to route events to three internal inputs (10515-10517/udp) based on content, but I'm glad you worked it out! In practice, I use syslog-ng.
We have data coming in that we need to alert on, but because of the formatting of the data, this is very hard to do. The data is coming in as key-value pairs, but the values are not encapsulated in quotes and are being truncated. For example:

_raw - filepath=c:\program files\abc123\
Parsed - filepath=c:\program

Everything after the space is ignored. If I wanted to find all occurrences where the path was c:\program files\abc123, I can't. We are sending the data via syslog to the Splunk servers. Thanks in advance!
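For reference, one common workaround for unquoted values is a search-time extraction that captures up to the next key= token rather than the next space. A sketch, assuming the field is literally named filepath and the sourcetype name is a placeholder (you may also need to suppress the conflicting automatic extraction):

# props.conf (search-time; sourcetype name is a placeholder)
[your:syslog:sourcetype]
EXTRACT-filepath = filepath=(?<filepath>.+?)(?=\s+\w+=|$)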
It needs to be

useACK = false

Then the error should resolve.
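For context, that setting lives in the forwarder's outputs.conf output group; a sketch, with the group name and server list as placeholders:

# outputs.conf on the forwarder (group name and servers are placeholders)
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = false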
Hi @tscroggins , this solution isn't applicable to my situation because we are receiving one data flow, from one host, with all the data mixed, so I cannot apply the sourcetype at the input. I worked it out (and solved it) by analyzing the data and identifying the kinds of data sources; then I took the related add-ons (Juniper, cisco:ios, cisco:ise, proofpoint, etc.) and modified all the props.conf files, applying the transformations from the usual sourcetype (e.g. fgt_log) to the sourcetype I have in my data flow. In this way I parsed all the data flows. Anyway, thank you for your help. Ciao. Giuseppe
Data sent to a metrics index must be in a particular format.  See https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Metrics/GetMetricsInOther for the specifics. You should be able to set up the script as a scripted input that writes CSV data to stdout.  Splunk will index anything sent to stdout.
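As a sketch of that approach (the app name, script path, metric name, and index are all hypothetical; the metrics_csv sourcetype and the CSV column layout should be verified against the linked docs for your version):

# inputs.conf (index must be a metrics index)
[script://$SPLUNK_HOME/etc/apps/my_metrics_app/bin/load_metrics.sh]
interval = 60
sourcetype = metrics_csv
index = my_metrics

# load_metrics.sh -- emit a CSV header, then one measurement per line
#!/bin/sh
echo "metric_timestamp,metric_name,_value"
echo "$(date +%s),cpu.load_1m,$(cut -d' ' -f1 /proc/loadavg)"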
Is it standard for the Splunk server itself to be over 50% of the daily indexing total? In our production environment, we are starting to run over the daily limit simply because of the Splunk server itself. I understand it's what does the heavy lifting, but it's hard to estimate how much licensing you may need when you don't know how to gauge what the server itself will use.
(In the example solution, you'll also need to add input/output configuration and/or parsing to strip unwanted extra timestamps and hosts from syslog messages.)
Hi @gcusello, Sorry for the delay. Did you find a working solution? My suggestion was something like:

# inputs.conf
[tcp://10514]
sourcetype = syslog
index = network

[udp://10515]
index = network
sourcetype = infoblox:port

[udp://10516]
index = network
sourcetype = juniper

[udp://10517]
index = network
sourcetype = fgt_log

# outputs.conf
[syslog:infoblox]
server = localhost:10515
type = udp
priority = NO_PRI

[syslog:juniper]
server = localhost:10516
type = udp
priority = NO_PRI

[syslog:fortinet]
server = localhost:10517
type = udp
priority = NO_PRI

# props.conf
[source::tcp:10514]
TRANSFORMS-reroute_syslog = route_infoblox, route_juniper, route_fortinet

# transforms.conf
[route_infoblox]
REGEX = \<\d+\>\w+\s+\d+\s+\d+:\d+\d+:\d+\s+\w+-dns-\w+
DEST_KEY = _SYSLOG_ROUTING
FORMAT = infoblox

[route_juniper]
REGEX = ^\<\d+\>\d+\s+\d+-\d+-\d+\w+:\d+:\d+\.\d+\w(\+|-)\d+:\d+\s\w+-edget-fw
DEST_KEY = _SYSLOG_ROUTING
FORMAT = juniper

[route_fortinet]
REGEX = ^\<\d+\>date\=\d+-\d+-\d+\s+time\=\d+:\d+:\d+\s+devname\=\"[^\"]+\"\s+devid
DEST_KEY = _SYSLOG_ROUTING
FORMAT = fortinet

All events sent to the 10514/tcp input will hit the specified transforms. On match, the event will be rerouted to one of the udp inputs using _SYSLOG_ROUTING. If the default syslog output queue size (97 KiB) isn't large enough, you can scale by increasing parallelIngestionPipelines (and resources if the HF performs other functions). I haven't tried increasing the syslog output queue size in some time, but it was hard-coded in the past. You can also use tcp inputs and type = tcp in syslog outputs, but when forwarding packets locally, the risk of loss comes from buffer/queue overruns, not the network.

All that said, rsyslog or syslog-ng (my preference) installed on the same host is a better solution. Preferably, you write and monitor files, or you can relay to local Splunk tcp/udp inputs. If you use files, you'll need adequate local storage for buffering and e.g. logrotate to manage retention. Both rsyslog and syslog-ng have mature and robust parsing languages.
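For the file-based variant of that, a minimal sketch (ports, paths, and the match pattern are placeholders) with syslog-ng writing one directory per source host and Splunk monitoring the tree:

# syslog-ng.conf (fragment; filter pattern is a placeholder)
source s_net { network(transport("udp") port(514)); };
filter f_fortinet { message("devname="); };
destination d_fortinet { file("/var/log/remote/fortinet/${HOST}/messages.log"); };
log { source(s_net); filter(f_fortinet); destination(d_fortinet); };

# inputs.conf
[monitor:///var/log/remote/fortinet]
sourcetype = fgt_log
index = network
host_segment = 5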
I strongly encourage you to use CLI commands to add cluster members rather than editing config files.  The commands will update config files for you.  If you edit the files yourself then Splunk must be restarted for the edits to take effect. To add a SHC member, see https://docs.splunk.com/Documentation/Splunk/9.1.2/DistSearch/Addaclustermember#Add_the_instance To add an indexer to a cluster, see https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/Addclusterpeer
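For scripting, the underlying CLI calls look roughly like this (URIs, ports, and secrets are placeholders; verify the flags against the linked docs for your version):

# On the new search head member, then restart:
splunk init shcluster-config -mgmt_uri https://new-sh:8089 -replication_port 9887 -secret <shc_secret>
splunk restart
# From any existing SHC member:
splunk add shcluster-member -new_member_uri https://new-sh:8089

# On the new indexer peer, then restart:
splunk edit cluster-config -mode peer -manager_uri https://cm:8089 -replication_port 9887 -secret <idxc_secret>
splunk restart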
Here is a more complete process from Splunk: https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/Secureyouradminaccount
I did copy the example text and ingest it successfully.  I did not see the encoded text you see.
Hi, how do I add MSAL4J.jar to DB Connect? I am getting the error: Failed to load MSAL4J Java library for performing ActiveDirectoryServicePrincipal authentication.
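In case it's useful, a common approach (a sketch, assuming a default Linux install; verify the exact dependency list for your driver version) is to place msal4j and its dependency jars in DB Connect's drivers directory and restart:

# Copy the jar(s) where DB Connect loads JDBC drivers
cp msal4j-*.jar "$SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers/"
# msal4j also needs its own dependencies (e.g. oauth2-oidc-sdk) on the same path
"$SPLUNK_HOME/bin/splunk" restart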
Hi @jbv, to my knowledge the correct mapping of data is done by the add-ons, so if you have the correct add-ons you have the mapping and normalization required by ES. I'm using this approach with many of our customers, collecting syslog with rsyslog. Ciao. Giuseppe
Hi all, We need to add a couple dozen new search head peers to a search head deployer, as well as add a couple dozen indexers to a cluster master, and we would like to script this implementation. I need to know which configuration files need to be modified to join these new search head peers and indexers to the existing Splunk environment. We plan on running an Ansible script for this implementation project. /Paul
Hi @Ryan.Paredez , Thank you for your response. The page you advised me to check gives more information about BT and RUM; I'm not sure it covers Synthetic Monitoring. I'm facing an issue with how to accept the cookie page while developing the script for a user journey, and need some information on how to accept/reject the cookie consent that appears right before the application's base page. Or guide me, if my understanding is not right here. Thank you, Mahendra Shetty