All Posts


Thanks for your response @gcusello. Could you suggest how to use spath for the same JSON log? How do I extract the JSON out of the log statement and then the individual fields? I am not able to share the actual logs, since they are client-specific.
It's not your app, it's a problem on Splunk's side. See the thread https://community.splunk.com/t5/Installation/500s-when-updating-apps-from-GUI-with-walkaround/m-p/630634 EDIT: If it's happening on Cloud and you can't apply the provided workaround, raise a case.
Hi @bharath_hk12, at first glance this seems to be a JSON log, so you can use the spath command (https://docs.splunk.com/Documentation/Splunk/9.1.2/SearchReference/Spath). Alternatively, you could use a regex like the following: | rex "\"firstName\":\"(?<firstname>[^\"]+)\".\"lastName\":\"(?<lastName>[^\"]+)" which you can test at https://regex101.com/r/XwqxZB/1 To be more certain, you should share some complete samples of your logs (not partial ones). Ciao. Giuseppe
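The capture behaviour of that rex pattern can be sanity-checked outside Splunk with an equivalent Python regex (a rough analogue: Splunk's rex uses PCRE, so the named-group syntax is translated to Python's `(?P<name>...)`; the sample event line is hypothetical):

```python
import re

# Hypothetical sample event matching the question's format.
event = 'Event data - {"firstName":"John","lastName":"Doe"}'

# Same shape as the suggested rex: capture up to the closing quote,
# let "." bridge the separator between the two key/value pairs.
pattern = r'"firstName":"(?P<firstName>[^"]+)"."lastName":"(?P<lastName>[^"]+)'

m = re.search(pattern, event)
print(m.group("firstName"), m.group("lastName"))  # John Doe
```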
What if I want to get the difference between _indextime and start_time? I am trying this: tostring(strftime(_indextime, "%Y/%m/%d %H:%M:%S") - strptime(start_time, "%Y/%m/%d %H:%M:%S"), "duration") but getting this error while trying to save: Operator requires numeric types. My search is:

index=xxx_xxx_firewall sourcetype IN(xxx:xxxxx, xxx:xxxxx) | eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S") | eval it = strptime(start_time, "%Y/%m/%d %H:%M:%S") | eval ot = strptime(receive_time, "%Y/%m/%d %H:%M:%S") | eval diff = tostring((ot - it), "duration") | table start_time, receive_time, indextime, _time, diff

Can you please help if you have some insights on this?
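For context, the "Operator requires numeric types" error most likely appears because strftime() returns a formatted string, which cannot be subtracted, while strptime() returns a numeric epoch, which can. The same distinction in a minimal Python sketch (the sample timestamps are hypothetical):

```python
from datetime import datetime

fmt = "%Y/%m/%d %H:%M:%S"
start_time   = "2023/11/14 22:10:00"  # hypothetical stand-in for start_time
receive_time = "2023/11/14 22:15:30"  # hypothetical stand-in for receive_time

# Parsing (like SPL strptime) gives values you can do arithmetic on;
# formatting (like SPL strftime) gives strings, which you cannot subtract.
it = datetime.strptime(start_time, fmt)
ot = datetime.strptime(receive_time, fmt)

diff_seconds = (ot - it).total_seconds()
print(diff_seconds)  # 330.0
```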
We talked about it with @bowesmana on Slack and it seems the behaviour is intentional and documented (albeit a bit vaguely): "Use the event order functions to return values from fields based on the order in which the event is processed, which is not necessarily chronological or timestamp order." (from https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Eventorderfunctions )
Hi gcusello, somehow this solution is not working for me. In my first lookup table I have 1000 firewalls and in the second lookup file 850 firewalls. I manually compared them in a spreadsheet and found that only 1 firewall is not available in the first lookup, so out of 1000 firewalls, 849 should match and 1 should not. The output should therefore look like:

Firewall name not matching: ABCDFW (count not matching: 1)
All remaining firewalls (count matching): 849

Hope you understand my requirement now.
Hi, I have logger statements like the following: Event data - {"firstName":"John","lastName":"Doe"} My query needs a <rex-statement> where the double quotes (") in the logs are parsed and the two fields are extracted into a table: index=my-index "Event data -" | rex <rex-statement> | fields firstName, lastName | table firstName, lastName Please let me know what <rex-statement> I have to put. Thanks in advance.
https://docs.splunk.com/Documentation/Splunk/Latest/Admin/Propsconf "* dotall (?s) and multi-line (?m) modifiers are added in front of the regex. So internally, the regex becomes (?ms)<regex>." So if your regex doesn't match, there might be something not 100% OK with it. It almost checks out on regex101, but it warns about the possible need to escape the included slashes, so I'd start by verifying that.
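The effect of those prepended modifiers can be demonstrated with Python's re module as a rough stand-in for PCRE (dialects differ slightly, but (?s) and (?m) behave the same way):

```python
import re

text = "line one\nline two"

# Without (?s), "." stops at a newline, so this regex cannot bridge lines.
print(bool(re.search(r"one.line", text)))       # False

# With (?ms) prepended (as Splunk does internally), "." crosses the newline.
print(bool(re.search(r"(?ms)one.line", text)))  # True

# (?m) makes ^ anchor at the start of every line, not just the string start.
print(re.findall(r"(?m)^line", text))           # ['line', 'line']
```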
You're mixing two different things. One is blacklisting by event ID: blacklist=4627,4688 or blacklist3=4627,4688 (of course it can be anything from blacklist1 all the way to blacklist9). That should work for any event format. The other format is filtering based on the event's contents (which might also include the EventID field). The equivalent would be blacklist=EventCode=%^(4627|4688)$% You can of course specify a different delimiter for your regex, so it might be, for example, blacklist=EventCode=+^(4627|4688)$+
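For illustration, the anchored alternation between the delimiters only matches an EventCode field that is exactly one of the listed IDs (checked here with Python's re as a stand-in for the regex engine; the sample codes are the ones from the ID-list example above):

```python
import re

# The regex between the % (or +) delimiters in the blacklist entry.
event_code_filter = re.compile(r"^(4627|4688)$")

print(bool(event_code_filter.match("4627")))   # True:  exact ID, blacklisted
print(bool(event_code_filter.match("14627")))  # False: ^ anchor rejects prefix
print(bool(event_code_filter.match("46270")))  # False: $ anchor rejects suffix
```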
If you don't have any retention parameters explicitly set, Splunk uses defaults, so you always have some lifetime limits. But to the point: unless you enable some form of input debugging, Splunk doesn't log every single input file read. And even if it did, it would go to the _internal index, which is by default kept for only 30 days. So your best bet would probably be to find which host the events came from and look at its forwarder's config. But that's not a 100% foolproof solution, since all metadata fields can be arbitrarily manipulated, so theoretically your data could have been, for example, pushed via HEC by some external mechanism.
OK. Because you're posting those config snippets a bit chaotically, please do a splunk btool inputs list splunktcp and a splunk btool outputs list tcpout on both of your components. And while posting the results here, please use either a code block (the </> sign at the top of the editing window here on the Answers forum) or the "preformatted" paragraph style. It makes it way easier to read.
Hi @Tyrian01, if you indexed that file, there was some content, and you didn't exceed the retention time, the log should be in your index. If it isn't there, check the three above conditions. I don't think this information is in the Splunk log files, because they surely have a much shorter retention than the other data. Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @yuanliu, the code in the macro is just the index name and source type with several fields; as our need is only host, we need that. Same code in macro 2 as well, only the source type varies: macro 1: index=sap sourcetype=1A* macro 2: index=sap sourcetype=2A* Thanks in advance!
Thanks Giuseppe, The wildcard in the search returns the same information. There are no retention issues on that index (no maximum on index size). I'd assume the information is available in a Splunk log file, not the indexed data. Thanks again Simon
Hi @Ryan.Paredez, thanks for your message, but this problem was not solved by installing the Event Service from the Enterprise Console. It was solved by installing ES manually and connecting it to the controller.
Like @meetmshah mentioned, create a new tag or field in the notable that defines which team will work on it. Once in place, create a filter in the Incident Review dashboard with that team tag or field and let the respective teams select and work on those filtered incidents.
I usually encounter this issue when there's a problem with the KV store; fixing the KV store resolves it. I also clean up my browser's cache to ensure nothing browser-related is causing the issue.
@bowesmana Thanks for the help, but I need the output in AM and PM sequence. Here is my actual output:

01:00:02 AM 9.14
01:00:02 PM 12.06
01:05:02 AM 10.00
01:05:02 PM 11.17
01:10:02 AM

I expect all the AM times to be displayed first, followed by the PM times:

01:00:02 AM 9.14
01:00:02 AM 12.06
01:05:02 AM 10.00
01:05:02 PM 11.17
01:10:02 PM
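One way to get that ordering is to sort on the parsed 12-hour timestamp rather than on the display string, so every AM row sorts before every PM row. A minimal Python sketch of the idea (the rows and field layout are hypothetical); in SPL the analogue would be an eval with strptime(time, "%I:%M:%S %p") followed by a sort on that field:

```python
from datetime import datetime

# Hypothetical (time, value) rows in the unsorted order from the question.
rows = [("01:00:02 PM", 12.06), ("01:00:02 AM", 9.14),
        ("01:05:02 PM", 11.17), ("01:05:02 AM", 10.00)]

# %I is the 12-hour clock and %p the AM/PM marker, so parsing yields
# 01:00:02 PM -> 13:00:02, which naturally sorts after every AM time.
rows.sort(key=lambda r: datetime.strptime(r[0], "%I:%M:%S %p"))

print([t for t, _ in rows])
# ['01:00:02 AM', '01:05:02 AM', '01:00:02 PM', '01:05:02 PM']
```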
I know the purpose of blacklist 
Event Code Watchlist: Think of your computer as a detective, always keeping an eye on what's happening. EventCodes are like clues or signals.
Blacklist = Unwanted Events: Blacklisting is saying, "I don't want these specific clues or signals." It's like telling the detective to ignore certain types of information.
Filtering Out Unwanted Stuff: Imagine you're sorting through mail. Blacklisting is like throwing away letters from certain senders you don't want to hear from.
Improving Focus: By blacklisting EventCodes, you're helping your computer focus on the events that matter and ignoring the ones that don't.
Less Noise, More Clarity: It's like reducing background noise so you can hear the important stuff clearly. Blacklisting helps your computer concentrate on significant events.