All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I need some help. I have this format type, but the word 'up' is not matching for some reason. There are no spaces or anything in the field value; the field value is extracted using 'rex'. I have this working on other fields, but this one has me stuck. Any help would be appreciated.

<format type="color" field="state">
  <colorPalette type="expression">if (value == "up","#Green", "#Yellow")</colorPalette>
</format>
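One way to rule out hidden characters in the field value (a hypothetical debugging step; the index name is a placeholder, and the field name `state` is taken from the post) is to compare the value with its length in search:

```
index=your_index
| eval state_len=len(state), state_quoted="[" . state . "]"
| table state state_len state_quoted
```

If `state_len` is longer than the visible text, or the brackets reveal padding, the extraction is picking up extra characters and the comparison with "up" will fail.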
Can you provide an example values.yaml with what you have in mind? Are you saying it should be the code below? If so, this YAML seems to output gateway logs, but they are not getting picked up and sent through to Splunk.

clusterName: CHANGEME
splunkObservability:
  realm: CHANGEME
  accessToken: CHANGEME
gateway:
  enabled: true
  replicaCount: 1
  resources:
    limits:
      cpu: 2
      memory: 4Gi
agent:
  enabled: false
clusterReceiver:
  enabled: false
logsCollection:
  containers:
    excludeAgentLogs: false
Hi @FPERVIL, see this document, which addresses your requirement: https://docs.splunk.com/Documentation/Splunk/9.2.0/Forwarding/Routeandfilterdatad#Filter_and_route_event_data_to_target_groups Ciao. Giuseppe
Hi all, I'm ingesting data using HEC in a distributed infrastructure, using a load balancer to distribute traffic from many senders across our Heavy Forwarders. Now I need to identify the sender of each event: is there metadata that identifies the hostname and IP address of each sender? I didn't find it in the HEC documentation. Thank you for your support. Ciao. Giuseppe
I have a few servers with universal forwarders that need to be updated so I can send the application data to one Splunk environment and the OS logs to another environment. I believe this is possible, but I'd like to know how to get this done. I'm assuming the inputs.conf and outputs.conf need to be updated. Just looking for guidance.
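A minimal sketch of the usual approach (the group names, hostnames, and monitor paths below are illustrative placeholders, not from the post): define one tcpout group per destination in outputs.conf, then route each input with _TCP_ROUTING in inputs.conf.

```
# outputs.conf (on the universal forwarder)
[tcpout]
defaultGroup = os_splunk

[tcpout:os_splunk]
server = os-indexers.example.com:9997

[tcpout:app_splunk]
server = app-indexers.example.com:9997

# inputs.conf - route per input stanza
[monitor:///var/log/myapp]
sourcetype = myapp
_TCP_ROUTING = app_splunk

[monitor:///var/log/messages]
sourcetype = syslog
_TCP_ROUTING = os_splunk
```

Inputs without an explicit _TCP_ROUTING fall back to the defaultGroup, so it is worth deciding which environment should receive anything unrouted.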
Hi Ricfez, sorry, I forgot to add more detail to this, but no, neither the IP nor the hostname of the firewalls has changed. Running a tcpdump, I can see the logs are hitting my SC4S (on-prem) bound for our Splunk Cloud instance. However, on the firewalls themselves the log format was set to "splunk"; maybe this could have an effect?
Hi @uagraw01, if, in the same period, you're receiving the other logs in the other, non-internal indexes, this means that you have a congestion of data, and internal logs (having a lower priority) are skipped. Check the queues in your Forwarders. Ciao. Giuseppe
Answering my own query, as I hope this could help others. The Oracle 19c RAC database had a SCAN address (Single Client Access Name) and needed additional firewall rules from the Splunk servers to the actual physical Oracle servers, as the JDBC driver in DB Connect needs access to them to establish a connection. Once the firewall rules were implemented, DB Connect worked.
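For reference, a SCAN-based thin-driver connection string typically looks like the following (the host, port, and service name here are illustrative placeholders, not values from the post):

```
jdbc:oracle:thin:@//myscan.example.com:1521/myservice
```

The SCAN listener then hands the connection off to one of the physical RAC nodes, which is why the firewall rules must cover the node addresses as well as the SCAN address itself.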
Hello,

1) What is the difference between using "| summaryindex" and "| collect"? Thank you for your help.

Summaryindex is generated by a scheduled report. I clicked "view recent" and the following is appended after the search.

| summaryindex spool=t uselb=t addtime=t index="summary" file="summary_test_1.stash_new" name="summary_test_1" marker="hostname=\"https://test.com/\",report=\"summary_test_1\""

Collect can be used to push outside of a scheduled report.

2) Can "| summaryindex" also be used to push data outside of a scheduled report?

| collect index=summary_test_1 testmode=false marker="report=summary_test_1"
Hi, I need your assistance with the below. We have created a new CSV lookup and are using the query below, but we are getting all the data from the index & sourcetype. The requirement is to get events only for the hosts mentioned in the lookup.

Lookup name: Win_inventory.CSV (uses only one column, called Server_name)

index=Nagio sourcetype=nagios:core:hard | lookup Win_inventory.CSV Server_name as host_name OUTPUTNEW Server_name

Server_name is not an existing interesting field.
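A common pattern for this kind of requirement (a sketch; it assumes the lookup column is Server_name and the event field is host_name, as described in the post) is to filter with an inputlookup subsearch instead of lookup + OUTPUTNEW, so that only events whose host appears in the CSV are returned:

```
index=Nagio sourcetype=nagios:core:hard
    [| inputlookup Win_inventory.CSV
     | rename Server_name as host_name
     | fields host_name ]
```

The subsearch expands to an OR of host_name=... terms, which restricts the base search to the hosts listed in the lookup.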
I am getting a warning of:

DateParserVerbose - Accepted time (Wed Feb 14 17:01:12 2024) is suspiciously far away from previous event (Thu Jan 18 17:01:12 2024); is still acceptable because it was extracted by the same pattern.

Is there any configuration that can help take this error away in Splunk?
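This warning usually means timestamp extraction is accepting dates that are weeks apart within the same stream. A hedged sketch of the props.conf settings that constrain timestamp parsing (the sourcetype name and values below are placeholders; tune them to your data, and note the TIME_FORMAT is guessed from the timestamps quoted in the warning):

```
# props.conf on the parsing tier (HF or indexer)
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %a %b %d %H:%M:%S %Y
MAX_DAYS_AGO = 30
MAX_DAYS_HENCE = 2
```

An explicit TIME_PREFIX/TIME_FORMAT stops Splunk from scanning the event for date-like strings, and MAX_DAYS_AGO / MAX_DAYS_HENCE bound how far from "now" an extracted timestamp may be before it is rejected.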
For the last two days I have not been receiving data in my Splunk internal index. Please help me understand this issue.
I have a use case where I want to return multiple values from a drop-down. The label is the same, but I want to return more than one value. I tried a secondary value like this, but it's not working.

"statics": [],
"label": ">primary | seriesByName(\"id\") | renameSeries(\"label\") | formatByType(formattedConfig)",
"value": ">primary | seriesByName(\"id\") | renameSeries(\"value\") | formatByType(formattedConfig)",
"value1": ">secondery | seriesByName(\"link\") | renameSeries(\"value\") | formatByType(formattedConfig)"
Hi all, I have to create a custom field at index time. I did it following the documentation, but there's something wrong. The field to read is a part of the source field (as you can read in the REGEX). Using a Deployment Server, I deployed to my Heavy Forwarders an app containing the following files: fields.conf, props.conf, transforms.conf.

In fields.conf I inserted:

[fieldname]
INDEXED = True

In props.conf I inserted:

[default]
TRANSFORMS-abc = fieldname

In transforms.conf I inserted:

[fieldname]
REGEX = /var/log/remote/([^/]+)/.*
FORMAT = fieldname::$1
WRITE_META = true
DEST_KEY = fieldname
SOURCE_KEY = source
REPEAT_MATCH = false
LOOKAHEAD = 100

Where's the error? What did I miss? Thank you for your help. Ciao. Giuseppe
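One likely culprit (hedged; this is based on the general behavior of indexed-field transforms, not on anything confirmed in this thread): when WRITE_META = true, the field is written to _meta, so DEST_KEY should be omitted rather than set to the field name. Reading the source at index time is also usually done via the MetaData:Source key. A minimal sketch of the transforms stanza under those assumptions:

```
# transforms.conf - indexed field taken from the source path
[fieldname]
SOURCE_KEY = MetaData:Source
REGEX = /var/log/remote/([^/]+)/
FORMAT = fieldname::$1
WRITE_META = true
```

The props.conf [default] stanza would also apply the transform to every sourcetype; scoping it with a [source::/var/log/remote/...] stanza is typically safer.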
Hi @SplunkExplorer,

> Where does it send logs? Directly to Indexers? To a HF?

A Splunk UF generally sends the logs to the indexer. But if your indexer is overloaded, or if you want to do some preprocessing beforehand, then you should use a HF (the UF sends the logs to the HF; the HF does some parsing tasks, then sends the logs to the indexer).

> Is a UF installed on it? If not, how does it send logs? WMI? WEF? Other?

Yes, the WMI option is available. And if you cannot install the UF, then you can use a syslog server to collect the logs from all systems that don't have a UF and send them to a HF or indexer.
Hello, I am new to Splunk and noticed we have two different authentication.conf files in the local folder. I compared the two and they are the same except for information regarding groupBaseDN and the host. Also, the bind password doesn't get hashed after a Splunk restart in one of the files. Unfortunately, I don't have more information on this and am trying to understand the reason for the 2nd file. If the bind password is updated in the 1st file, we are still not able to access Splunk, which makes me think the 2nd file is needed for some reason. Would anyone be able to suggest the reasons for the 2nd file? I wonder if I need it. I'm also concerned about using the 2nd file, since the password doesn't get hashed. Thank you for your help!
Changing the MAC address probably shouldn't affect anything, but changing IP addresses might. In any case, I'd start with your firewall - how is it configured to send syslog, to what address specifically? Is it actually doing so? You basically just need to follow the path the data is supposed to take and find out where it's failing.

That may lead directly to Splunk Cloud, with Splunk Cloud listening on a network port. https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/HowSplunkEnterprisehandlessyslogdata In that case you might have to adjust the IP allow list in Splunk Cloud. https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Config/ConfigureIPAllowList

It's also possible the device is sending its logs to a local syslog server, which has a Splunk forwarder installed and which then sends the logs in to Splunk. If that's the case, then the problem is most likely with the firewall - either, as mentioned earlier, it's not actually sending syslog, or it's possible that in the firewall swap a rule or two has been missed. And if that's the case (FW -> syslog locally, Splunk forwarder -> Splunk Cloud), it's not likely anything in the last half of that path is broken, but you could check for something simple, like a forwarder that got jammed up and just needs a restart.

But as mentioned - start with your firewall's syslog settings and work your way through the syslog data flow, and I'm sure you'll find it.
Hi @lbrhyne, I also opened a case with Splunk Support, and they said that this behavior is all normal! Please push as well; maybe then they will understand! I'm discussing it with them because, for me, this is a bug, also because, if you create a field extraction using the regex101 regex (it's the only one that runs in field extractions!) and then try to use the IFX, you'll get a red error! Ciao. Giuseppe
@gcusello Thank you! Your solution partly worked. At search time this worked perfectly:

Using - nTimeframe\s+\(\w+\)\s+\w+\s+\w+\s+\%\s+\w+\\\\\w+\\\\\w+\\\\\w+\\\\\w\d+\:\d+\-\d+\:\d+\\\\\w+\\\\\w+\\\\\w+\\\\\w(?P<Successful>\d+)

However, neither the regex above nor the following worked as a field extraction:

Regex101 - nTimeframe\s+\(\w+\)\s+\w+\s+\w+\s+\%\s+\w+\\\w+\\\w+\\\w+\\\w\d+\:\d+\-\d+\:\d+\\\w+\\\w+\\\w+\\\w(?P<Successful>\d+)

I have opened a ticket with Splunk to see if they can figure it out. For now I will be using the search-time extraction. If Splunk provides a solution, I will post an update.
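The doubled backslashes in the working search-time regex suggest an extra layer of string escaping is consumed before the pattern reaches the regex engine. The same effect can be sketched in Python (an analogy, not the Splunk implementation; the sample text and pattern are made up for illustration): a quoted string consumes one layer of backslashes, so matching one literal backslash takes four characters in a plain string but only two in a raw string.

```python
import re

sample = "Timeframe\\2024"  # the text contains ONE literal backslash before 2024

# In a plain (non-raw) string, "\\\\" is reduced to two characters (\\) by
# string unescaping, which is the regex escape for one literal backslash.
escaped_twice = "Timeframe\\\\(?P<Successful>\\d+)"

# In a raw string, no string-level unescaping happens, so \\ suffices.
raw_pattern = r"Timeframe\\(?P<Successful>\d+)"

m1 = re.search(escaped_twice, sample)
m2 = re.search(raw_pattern, sample)
assert m1 and m1.group("Successful") == "2024"
assert m2 and m2.group("Successful") == "2024"
```

This matches the symptom in the post: the search-time rex (which parses its pattern out of a quoted string) needed `\\\\`, while a context that passes the pattern through without that extra unescaping only needs `\\`.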
We noticed this TA error issue on Splunk Cloud as soon as this TA was installed, so there is no license issue in our environment. A quick test is to install this TA in a validly licensed environment and see it break search. The OP may have had a coincidental license error.