All Posts



We have data coming in that we need to alert on, but because of the formatting of the data, this is very hard to do. The data is coming in as key=value pairs, but the values are not encapsulated in quotes and are being truncated at the first space. For example:

_raw: filepath=c:\program files\abc123\
Parsed: filepath=c:\program

Everything after the space is ignored, so if I wanted to find all occurrences where the path was c:\program files\abc123, I can't. We are sending the data via syslog to the Splunk servers. Thanks in advance!
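Until the ingest-side extraction is fixed, the usual workaround is a search-time extraction that captures everything up to the next key= token instead of stopping at whitespace. A minimal sketch of the idea in Python (the sample event and field names are illustrative; in Splunk the same pattern could be tried with the rex command, e.g. | rex "filepath=(?<filepath>.+?)(?=\s+\w+=|\s*$)"):

```python
import re

# Illustrative raw event: unquoted key=value pairs whose values may contain spaces
raw = r"action=blocked filepath=c:\program files\abc123\ user=jdoe"

# Lazily capture after "filepath=" until a lookahead sees the next key= token
# (or the end of the event), so embedded spaces are kept in the value.
match = re.search(r"filepath=(?P<filepath>.+?)(?=\s+\w+=|\s*$)", raw)
print(match.group("filepath"))  # -> c:\program files\abc123\
```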
It needs to be useACK = false. Then these errors should resolve.
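For reference, the setting lives in outputs.conf on the forwarder; a minimal sketch (the tcpout group name here is an assumption, use your own):

```
# outputs.conf on the forwarder
[tcpout:default_indexers]
useACK = false
```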
Hi @tscroggins, this solution isn't applicable to my situation because we are receiving one data flow, from one host, with all the data mixed together, so I cannot assign the sourcetype at the input. I worked around it (and solved it) by analyzing the data and identifying the kinds of data sources; then I took the related add-ons (Juniper, cisco:ios, cisco:ise, proofpoint, etc.) and modified their props.conf, applying the transforms from the usual sourcetype (e.g. fgt_log) to the sourcetype I have in my data flow. In this way I parsed all the data flows. Anyway, thank you for your help. Ciao. Giuseppe
Data sent to a metrics index must be in a particular format. See https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Metrics/GetMetricsInOther for the specifics. You should be able to set up the script as a scripted input that writes CSV data to stdout; Splunk will index anything the script sends to stdout.
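A minimal sketch of such a scripted input in Python. The column names follow the metrics CSV shape described in the linked docs (a metric_name column, a numeric _value column, and optional dimension columns); the cpu/memory figures and the host dimension are placeholders standing in for your script's real results:

```python
import csv
import sys
import time

def metric_rows(now=None):
    """Build CSV rows in the metrics shape: metric_name, _value, dimensions."""
    now = int(time.time()) if now is None else now
    return [
        ["metric_timestamp", "metric_name", "_value", "host"],
        [now, "cpu.usage.percent", 42.5, "server01"],
        [now, "memory.used.mb", 2048, "server01"],
    ]

if __name__ == "__main__":
    # A scripted input only has to print the CSV to stdout.
    csv.writer(sys.stdout).writerows(metric_rows())
```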
Is it standard for the Splunk server itself to account for over 50% of the daily indexing total? In our production environment, we are starting to run over our daily license simply because of the Splunk server itself. I understand it's what does the heavy lifting, but it's hard to work out how much licensing you may need when you don't know how to gauge what the server itself will consume.
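To see which hosts are actually consuming license, a search against the license manager's internal logs can help. A sketch (the b and h field names are the standard ones in license_usage.log, but verify in your environment):

```
index=_internal source=*license_usage.log type=Usage
| stats sum(b) AS bytes BY h
| eval GB = round(bytes/1024/1024/1024, 2)
| sort - GB
```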
(In the example solution, you'll also need to add input/output configuration and/or parsing to strip unwanted extra timestamps and hosts from syslog messages.)
Hi @gcusello,

Sorry for the delay. Did you find a working solution? My suggestion was something like:

# inputs.conf
[tcp://10514]
sourcetype = syslog
index = network

[udp://10515]
index = network
sourcetype = infoblox:port

[udp://10516]
index = network
sourcetype = juniper

[udp://10517]
index = network
sourcetype = fgt_log

# outputs.conf
[syslog:infoblox]
server = localhost:10515
type = udp
priority = NO_PRI

[syslog:juniper]
server = localhost:10516
type = udp
priority = NO_PRI

[syslog:fortinet]
server = localhost:10517
type = udp
priority = NO_PRI

# props.conf
[source::tcp:10514]
TRANSFORMS-reroute_syslog = route_infoblox, route_juniper, route_fortinet

# transforms.conf
[route_infoblox]
REGEX = \<\d+\>\w+\s+\d+\s+\d+:\d+:\d+\s+\w+-dns-\w+
DEST_KEY = _SYSLOG_ROUTING
FORMAT = infoblox

[route_juniper]
REGEX = ^\<\d+\>\d+\s+\d+-\d+-\d+\w+:\d+:\d+\.\d+\w(\+|-)\d+:\d+\s\w+-edget-fw
DEST_KEY = _SYSLOG_ROUTING
FORMAT = juniper

[route_fortinet]
REGEX = ^\<\d+\>date\=\d+-\d+-\d+\s+time\=\d+:\d+:\d+\s+devname\=\"[^\"]+\"\s+devid
DEST_KEY = _SYSLOG_ROUTING
FORMAT = fortinet

All events sent to the 10514/tcp input will hit the specified transforms. On match, the event will be rerouted to one of the udp inputs using _SYSLOG_ROUTING. If the default syslog output queue size (97 KiB) isn't large enough, you can scale by increasing parallelIngestionPipelines (and resources, if the HF performs other functions). I haven't tried increasing the syslog output queue size in some time, but it was hard-coded in the past. You can also use tcp inputs and type = tcp in syslog outputs, but when forwarding packets locally, the risk of loss comes from buffer/queue overruns, not the network.

All that said, rsyslog or syslog-ng (my preference) installed on the same host is a better solution. You can preferably write and monitor files, or you can relay to local Splunk tcp/udp inputs. If you use files, you'll need adequate local storage for buffering and e.g. logrotate to manage retention.
Both rsyslog and syslog-ng have mature and robust parsing languages.
I strongly encourage you to use CLI commands to add cluster members rather than editing config files.  The commands will update config files for you.  If you edit the files yourself then Splunk must be restarted for the edits to take effect. To add a SHC member, see https://docs.splunk.com/Documentation/Splunk/9.1.2/DistSearch/Addaclustermember#Add_the_instance To add an indexer to a cluster, see https://docs.splunk.com/Documentation/Splunk/9.1.2/Indexer/Addclusterpeer
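For scripting, the underlying CLI calls look roughly like this (flag names as documented for recent 9.x releases; URIs, ports, and keys are placeholders to verify against the pages linked above):

```
# Run on the new search head to join an existing SHC:
splunk init shcluster-config -mgmt_uri https://<new_sh>:8089 \
  -replication_port 9200 -conf_deploy_fetch_url https://<deployer>:8089 \
  -secret <shcluster_key>
splunk add shcluster-member -current_member_uri https://<existing_member>:8089

# Run on the new indexer to join it to the cluster:
splunk edit cluster-config -mode peer -manager_uri https://<manager>:8089 \
  -replication_port 9887 -secret <cluster_key>
splunk restart
```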
Here is a more complete process from Splunk https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/Secureyouradminaccount
I did copy the example text and ingest it successfully.  I did not see the encoded text you see.
Hi, how can I add MSAL4J.jar to DB Connect? I am getting this error: Failed to load MSAL4J Java library for performing ActiveDirectoryServicePrincipal authentication.
Hi @jbv, to my knowledge the correct mapping of data is done by the add-ons, so if you have the correct add-ons you have the mapping and normalization required by ES. I'm using this approach with many of our customers, taking syslog with rsyslog. Ciao. Giuseppe
Hi all, We need to add a couple dozen new search head peers to a search head deployer, as well as a couple dozen indexers to a cluster master, and we would like to script this implementation. I need to know which configuration files need to be modified to join these new search head peers and indexers to the existing Splunk environment. We plan on running an Ansible playbook for this implementation project. /Paul
Hi @Ryan.Paredez, Thank you for your response. The page you advised me to check gives more information about BT and RUM; I'm not sure it covers Synthetic Monitoring. I'm facing an issue with how to accept the cookie page while developing the script for a user journey, and I need some information on how to accept/reject the cookie consent dialog that appears just before the application's base page. Or guide me if my understanding is not right here. Thank you, Mahendra Shetty
We have had these false positives lately too, and we found out with the help of the following search that our peers ran into authTokenConnectionTimeout, which defaults to 5 seconds. authTokenConnectionTimeout is located in distsearch.conf.

index=_internal (GetRemoteAuthToken OR DistributedPeer OR DistributedPeerManager) source!="/opt/splunk/var/log/splunk/remote_searches.log"
| rex field=_raw "Peer:(?<peer>\S+)"
| rex field=_raw "peer: (?<peer>\S+)"
| rex field=_raw "uri=(?<peer>\S+)"
| eval peer = replace(peer, "https://", "")
| rex field=_raw "\d+-\d+-\d+\s+\d+:\d+:\d+.\d+\s+\S+\s+(?<loglevel>\S+)\s+(?<process>\S+)"
| rex field=_raw "\] - (?<logMsg>.+)"
| reverse
| eval time=strftime(_time, "%d.%m.%Y %H:%M:%S.%Q")
| bin span=1d _time
| stats list(*) as * by peer _time
| table peer time loglevel process logMsg
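If the timeout is the cause, it can be raised on the search head in distsearch.conf. A sketch (the stanza placement is an assumption to verify against the distsearch.conf spec for your version; value in seconds):

```
# distsearch.conf on the search head
[distributedSearch]
authTokenConnectionTimeout = 10
```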
Hello Experts, I'm facing a challenge where I need to automatically load data from Python script results into a metrics index in Splunk. Is it possible? I'd appreciate any guidance or examples of how to achieve this. Thanks
I'm having the same issue with the weekday and month-date abbreviations. Is this something that will be fixed, or do we need to fix it ourselves creatively?

mer. 13 déc. 2023 23:31:20 CET file_hash=96def1...
mar. 19 déc. 2023 22:06:55 CET user=x ...
mar. 19 déc. 2023 09:16:13 CET user=y ...
The issue is not fixed after upgrading to 9.1.2. This issue occurred on a search head cluster. My settings in outputs.conf:

[indexer_discovery:target_master]
pass4SymmKey = **********

[tcpout]
defaultGroup = default_indexers
forceTimebasedAutoLB = true
maxQueueSize = 7MB
useACK = true

[tcpout:default_indexers]
server = **********01:9997,**********02.lan:9997
I use a workaround. Background of what I did: I am writing a bash script to create a Splunk diag as the splunk user and move the diag file to my desired folder.

[root@myserver ~]# vi test_script.sh

#!/bin/bash
sudo -i -u splunk bash << EOF
# Create the Splunk diagnostic report and capture its output
/opt/splunk/bin/splunk diag > /opt/splunk/mylog.log
EOF

# Use grep to extract the diag path from the output
diag_path=$(grep -oP '(?<=Splunk diagnosis file created: )/.*\.tar\.gz' /opt/splunk/mylog.log)
echo "$diag_path"

# Check that the path is not empty
if [ -n "$diag_path" ]; then
    # Move the file to /root/mydesiredfolder
    mv "$diag_path" /root/mydesiredfolder
    echo "File moved successfully to /root/mydesiredfolder"
else
    echo "Path not found or command did not generate the expected output"
fi

# Cleanup
rm /opt/splunk/mylog.log

The idea behind it is that when running ./splunk diag, it prints output like this:

Splunk diagnosis file created: /opt/splunk/diag-servername-2023-12-22_08-19-01.tar.gz
I would like to write an answer, but could you please tell me the final form of the output you would like to create?