All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi gcusello, somehow this solution is not working for me. My first lookup table has 1000 firewalls and the second lookup file has 850. I manually checked in a spreadsheet by comparing them against each other and found that only 1 firewall is missing from the first lookup, so out of 1000 firewalls, 849 should match and 1 should not. The output should display like:

Firewall Name which is not matching | Count of FW not matching | Count of FW matching
ABCDFW                              | 1                        |
all remaining firewalls             |                          | 849

Hope you understand my requirement now.
Hi, I have logger statements like below: Event data - {"firstName":"John","lastName":"Doe"}   My query needs a <rex-statement> where the double quotes (") in the logs are parsed and the two fields are extracted into a table: index=my-index "Event data -" | rex <rex-statement> | fields firstName, lastName | table firstName, lastName    Please let me know what <rex-statement> I have to put. Thanks in advance.
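A minimal sketch of a <rex-statement> that should work here, assuming the field order (firstName before lastName) is fixed as in the sample event:

```
index=my-index "Event data -"
| rex "\"firstName\":\"(?<firstName>[^\"]+)\",\"lastName\":\"(?<lastName>[^\"]+)\""
| table firstName, lastName
```

If the part after "Event data -" is always valid JSON, extracting that substring and feeding it to spath would be a more robust alternative than a hand-written regex.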
https://docs.splunk.com/Documentation/Splunk/Latest/Admin/Propsconf says the dotall (?s) and multi-line (?m) modifiers are added in front of the regex, so internally the regex becomes (?ms)<regex>. So if your regex doesn't match, there might be something not 100% OK with it. It almost checks out on regex101, but it warns about the possible need to escape the included slashes, so I'd start by verifying that.
You're mixing two different things. One is blacklisting by event ID: blacklist=4627,4688 or blacklist3=4627,4688 (of course it can be blacklist1 all the way to blacklist9). That should work for any event format. The other is filtering based on the event's contents (which may also include the EventCode field). The equivalent there would be blacklist=EventCode=%^(4627|4688)$%. You can of course specify a different delimiter for your regex, so it might be, for example, blacklist=EventCode=+^(4627|4688)$+
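As a sketch, both styles side by side in inputs.conf (the stanza name and event IDs here are illustrative, not taken from the poster's config):

```
[WinEventLog://Security]
# Style 1: blacklist by a plain list of event IDs
blacklist1 = 4627,4688
# Style 2: blacklist by matching event contents with a delimited regex
blacklist2 = EventCode=%^(4627|4688)$%
```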
If you don't have any retention parameters explicitly set, Splunk uses defaults, so you always have some lifetime limits. But to the point: unless you enable some form of input debugging, Splunk doesn't log every single input file read. And even if it did, that would go to the _internal index, which is by default kept for only 30 days. So your best bet would probably be to find which host the events came from and look at its forwarder's config. But that's not a 100% foolproof solution, since all metadata fields can be arbitrarily manipulated; theoretically your data could have been, for example, pushed via HEC by some external mechanism.
OK. You're posting those config snippets a bit chaotically. Please run

splunk btool inputs list splunktcp
splunk btool outputs list splunktcp

on both of your components. And when posting the results here, please use either a code block (the </> sign at the top of the editing window here on the Answers forum) or the "preformatted" paragraph style. It makes it much easier to read.
Hi @Tyrian01, if you indexed that file, there was some content, and you didn't exceed the retention time, the log should be in your index. If it isn't, check the three conditions above. I don't think this information is in the Splunk log files, because they surely have a much shorter retention than the other data. Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @yuanliu, the code in macro 1 is just the index name and source type with several fields; as our need is only host, that is all we need. The code in macro 2 is the same; only the source type varies:

macro 1: index=sap sourcetype=1A*
macro 2: index=sap sourcetype=2A*

Thanks in advance!
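For reference, a sketch of how two such macros might be defined in macros.conf (the macro names here are placeholders, not the poster's actual names):

```
# macros.conf
[sap_sourcetype_1a]
definition = index=sap sourcetype=1A*

[sap_sourcetype_2a]
definition = index=sap sourcetype=2A*
```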
Thanks Giuseppe, The wildcard in the search returns the same information. There are no retention issues on that index (no maximum on index size). I'd assume the information is available in a Splunk log file, not the indexed data. Thanks again, Simon
Hi @Ryan.Paredez, thanks for your message, but this problem is not solved by installing the Event Service from the Enterprise Console. It was solved by installing ES manually and connecting it to the controller.
As @meetmshah mentioned, create a new tag or field in the notable that defines which team will work on it. Once in place, create a filter in the Incident Review dashboard on that team tag or field and let the respective teams select and work on those filtered incidents.
I usually encounter this issue when there's a problem with the KV store. Fixing the KV store resolves it. I also clean up my browser's cache to make sure nothing browser-related is causing the issue.
@bowesmana Thanks for the help, but I need the output in AM then PM sequence. Here is my actual output:

01:00:02 AM 9.14
01:00:02 PM 12.06
01:05:02 AM 10.00
01:05:02 PM 11.17
01:10:02 AM

I expect all the AM times to be displayed first, followed by the PM times:

01:00:02 AM 9.14
01:05:02 AM 10.00
01:10:02 AM
01:00:02 PM 12.06
01:05:02 PM 11.17
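A minimal sketch of one way to get that ordering, assuming the time column is a string field named time (an assumed name): build a 24-hour sort key with strptime/strftime, sort on it, then drop it. AM hours sort as 00-11 and PM hours as 12-23, so all AM rows come first:

```
| eval sort_key = strftime(strptime(time, "%I:%M:%S %p"), "%H:%M:%S")
| sort 0 sort_key
| fields - sort_key
```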
I know the purpose of blacklist.
Event Code Watchlist: Think of your computer as a detective, always keeping an eye on what's happening. EventCodes are like clues or signals.
Blacklist = Unwanted Events: Blacklisting is saying, "I don't want these specific clues or signals." It's like telling the detective to ignore certain types of information.
Filtering Out Unwanted Stuff: Imagine you're sorting through mail. Blacklisting is like throwing away letters from certain senders you don't want to hear from.
Improving Focus: By blacklisting EventCodes, you're helping your computer focus on the events that matter and ignore the ones that don't.
Less Noise, More Clarity: It's like reducing background noise so you can hear the important stuff clearly. Blacklisting helps your computer concentrate on significant events.
Can you provide me any suggestions to resolve this issue?
So I'm new to Splunk on GCP and still learning. One thing I'm trying to wrap my head around is this: GCP Pub/Sub provides native support for HTTP push, which is pretty straightforward. Now Splunk has the GCP Dataflow template, which seems to be a data pipeline that just reformats the logs and pushes them through the Splunk HEC, an HTTP endpoint. From an architectural point of view, introducing the Dataflow template into the GDI is an extra layer when the log export seemingly can be done by Pub/Sub HTTP push, so what is the specific value added by the Dataflow template?
Hello, I have some issues performing multi-line field extraction for XML; my in-line extraction is not getting any results. Sample events and my in-line extraction are provided below. Any help would be appreciated.

Sample Events:

<Event>
<ID>0123011</ID>
<Time>2023-10-28T05:22:37.97011</Time>
<Application_Name>Test</Application_Name>
<Host_Name>VS0SMADBEFT</Host_Name>
</Event>
<Event>
<ID>01232113</ID>
<Time>2023-10-28T05:22:37.99011</Time>
<Application_Name>Test</Application_Name>
<Host_Name>VS0SMADBEFT</Host_Name>
</Event>

In-Line Extraction I Used:

<ID>(?<ID>[^<]+)<\/ID>([\r\n]*)<Time>(?<Time>[^<]+)</Time>([\r\n]*)<Application_Name>(?<Application_Name>[^<]+)</Application_Name>([\r\n]*)<Host_Name>(?<Host_Name>[^<]+)</Host_Name>
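Not a definitive fix, but a sketch of a rex that matches the sample events above, with the closing-tag slashes escaped consistently and `\s*` allowing any whitespace between tags (equivalent to the `([\r\n]*)` groups):

```
| rex "<ID>(?<ID>[^<]+)<\/ID>\s*<Time>(?<Time>[^<]+)<\/Time>\s*<Application_Name>(?<Application_Name>[^<]+)<\/Application_Name>\s*<Host_Name>(?<Host_Name>[^<]+)<\/Host_Name>"
```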
OH..! I get this token in a <single> block. The code for this <single> block looks like this:

<single>
  <title>Max time</title>
  <search>
    <query>index=idx_prd_analysis sourcetype="type:prd_analysis:delay_time" corp="delay" | where (plane_type==1) OR (plane_type==2) | eval total_time = round(takeOff_time - boarding_time, 3) | stats MAX(total_time)</query>
    <earliest>$_time.earliest$</earliest>
    <latest>$_time.latest$</latest>
  </search>
  <option name="colorMode">block</option>
  <option name="drilldown">all</option>
  <option name="height">154</option>
  <option name="numberPrecision">0.000</option>
  <option name="rangeValues">[500]</option>
  <option name="refresh.display">progressbar</option>
  <option name="unitPosition">before</option>
  <drilldown>
    <set token="max_value">$click.value$</set>
  </drilldown>
</single>

Is there any solution to send max_value in plain format rather than the displayed one? (Not 123,456 -> want 123456)
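One possible approach, as a sketch: Simple XML drilldowns support an <eval> token in place of <set>, which lets you strip the thousands separators from the clicked value (this assumes a comma is the only formatting character in the displayed number):

```
<drilldown>
  <eval token="max_value">replace("$click.value$", ",", "")</eval>
</drilldown>
```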
Hello all! This will be a doozy, so get ready. We are running a search with tstats-generated results; from various troubleshooting we simplified it to the following:

| tstats count by host
| rename host as hostname
| outputlookup some_kvstore

The config of the kvstore is as follows:

# collections.conf
[some_kvstore]
field.hostname = string

# transforms.conf
[some_kvstore]
collection = some_kvstore
external_type = kvstore
fields_list = hostname

When you run the first 2 lines of the SPL, you get quite a few results, as it queries the internal db for hosts and retrieves a count of their logs. After you add the outputlookup command, it removes all your results and will not add them to the kvstore. As my coworker found, there is a way to write the results to the kvstore after all; however, the SPL for that is quite cursed, as it involves joining the original search back in, but the new results will be written to the kvstore:

| tstats count by host
| rename host as hostname
| table hostname
| join hostname [| tstats count by host | rename host as hostname]
| outputlookup some_kvstore

As far as I'm aware, 9.1.2, 9.0.6, and the latest versions of Cloud have this issue even on fresh installs of Splunk; however, it does work on 8.2.1 and 7.3.3 systems (don't ask). The Splunk user owns everything in the Splunk dir, so there is no problem with writing to any files; the kvstore permissions are global, and any user can read or write to it. So after several hours of troubleshooting, we are stumped and not sure where we should look next. Changing to a csv is unfortunately not an option.
Things we have tried so far, that I can remember:
- Completely fresh installs of Splunk
- Cleaning the kvstore via `splunk clean kvstore -local`
- Outputting to a csv (works)
- Using makeresults to create the fields manually and add to the kvstore (works)
- Using the noop command to disable all search optimization
- Writing to the kvstore via API (works)
- Reading data from the kvstore via inputlookup (works)
- Modifying an entry in the kvstore via the lookup editor app (works)
- Testing with all search modes (fast, smart, verbose)