All Posts
Hi, is it somehow possible to find the difference between two or more amounts from different events when the events are grouped into 20-second time spans?

index=Prod123 methodType='WITHDRAW' currency='GBP' jurisdiction=UK transactionAmount>=3
| fieldformat _time = strftime(_time, "%Y-%m-%d %H:%M:%S")
| bin span=20s _time
| search transactionAmount=*
| stats list(transactionAmount) as Total, list(currency) as currency, list(_time) as Time, dc(customerId) as Users by _time
| fieldformat Time = strftime(Time, "%Y-%m-%d %H:%M:%S")
| fieldformat _time = strftime(_time, "%Y-%m-%d %H:%M:%S")
| search Users>=2
| sort - Time

I would like it to show the difference between the Totals, e.g. between the first pair of amounts, 3.8 and 11.2. Is it possible to make this work somehow, or would it be better with streamstats and a window? I was also thinking about using sum or avg as an option. Thank you,
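A minimal sketch of one way to get that difference, assuming "difference" means the gap between the largest and smallest transactionAmount within each 20-second bin; this is a trimmed-down variation of the search above using range():

index=Prod123 methodType='WITHDRAW' currency='GBP' jurisdiction=UK transactionAmount>=3
| bin span=20s _time
| stats list(transactionAmount) as Amounts, range(transactionAmount) as AmountDiff, dc(customerId) as Users by _time
| where Users>=2
| sort - _time

For differences between consecutive events inside a bin, streamstats window=2 range(transactionAmount) as pairDiff by _time placed after the bin is one alternative.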
Sure @richgalloway. Feedback submitted, thanks! EDIT - got a reply from Splunk Docs team member Jen that the docs are updated; I have verified it as well, so I am accepting this as the solution. Thanks.
Yes, that's a typo.  Submit feedback on that documentation page.
Hi All, I am looking for some dashboards showing the usage of Apps and their dashboards by user, so that I can decommission unused Apps. The solution below only extracts very partial information, maybe 15%: Solved: See User Activity by App and View - Splunk Community. Can someone please help? Regards, Devang @tnesavich_splun
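A hedged sketch of one common starting point: the UI access log in _internal. The field names (user, status, uri) rely on the default splunk_web_access extractions, and the rex pattern for /app/<app>/<view> URLs is an assumption, so treat this as a starting point rather than a complete usage report:

index=_internal sourcetype=splunk_web_access user=* status=200
| rex field=uri "/app/(?<app>[^/]+)/(?<dashboard>[^/?\s]+)"
| search app=* dashboard=*
| stats count as views, latest(_time) as last_viewed by app dashboard user
| convert ctime(last_viewed)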
I would trust Splunk Support over random Internet people.  Use the CLI to get rid of the old data and let the updated settings take care of new data.  Problem solved.
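For reference, a sketch of that CLI step (the index name is a placeholder, and splunkd must be stopped before cleaning):

splunk stop
splunk clean eventdata -index my_index
splunk start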
I apologize for the confusion; yes, the Splunk infrastructure is placed on the isolated network. I am trying to find the best solution. I have a few servers and workstations running Windows and would like to monitor Windows events and do network monitoring; I am also currently using MySQL, and I am trying to see how I can use Splunk for this.
ITWhisperer, yeah, that doesn't work, but now I realize it's because there is a file path reference: source::X:\logs\[some IP]\log123.txt|host::[host]
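A hedged variation that captures everything up to the pipe delimiters instead of only word characters, which should tolerate file paths like the one above:

| rex "Context: source::(?P<sourcetypeissue>[^|]+)\|host::(?P<sourcehost>[^|]+)"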
Try it like this | rex "Context: source::(?P<sourcetypeissue>\w+)\Shost::(?P<sourcehost>\w+)"
Hi, Many thanks for posting. This fixed the issue for us as well. It should, however, only be necessary to create $SPLUNK_HOME/etc/system/local/web.conf (or edit it if it already exists) and add:

#Workaround for toolbar not loading after 9.x upgrade
[settings]
minify_js = False

As always, you should not modify anything under ./default in your Splunk instance unless you're an app developer creating an app. Regards
Hi Fellow Splunkers, I have a hopefully quick question: I want to pull out the source and host from the Windows _internal Splunk logs, but my rex (cribbed from a post on here) isn't working.

index=_internal host IN (spfrd1, spfrd2) source="*\\Splunk\\var\\log\\splunk\\splunkd.log" component=DateParserVerbose
| rex "Context: source=(?P<sourcetypeissue>\w+)\Shost=(?P<sourcehost>\w+)"
| stats list(sourcetypeissue) as file_name list(sourcehost)

But I get no stats. My events look like this:

08-24-2022 07:50:20.383 -0400 WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Sun Aug 24 07:49:58 2022). Context: source::WMI:WinEventLog:Security|host::SPFRD1|WMI:WinEventLog:Security|1
This is an almost impossible ask as it depends on what scenarios you want to investigate and which of these apps are and are not involved.
Hello Splunk Community, I hope this message finds you well. I'm currently working on enhancing my workflow in the Search and Reporting app, specifically when using the datamodel command. I'm looking to streamline the process of adding fields to my search through simple clicks within the app, e.g.:

| datamodel summariesonly=t allow_old_summaries=t Windows search
| search All_WinEvents.src_user="windows_user" All_WinEvents.EventCode="5140"

and I'd like to extend it with All_WinEvents.action="success", but without typing it in, using the Search and Reporting app itself. I've noticed that when I interactively add fields, the query tends to extend based on indexed fields rather than the datamodel fields. My goal is to understand if there's a way to make this process more datamodel-centric.

Is there a way to configure or adjust settings so that when I click to add fields in the Search and Reporting app, it extends the query based on the datamodel command rather than defaulting to indexed fields? E.g. result:

| datamodel summariesonly=t allow_old_summaries=t Windows search
| search All_WinEvents.src_user="windows_user" All_WinEvents.EventCode="5140" All_WinEvents.action="success"

Any insights, tips, or guidance on achieving this would be highly appreciated. Thank you in advance for your assistance! Best regards,
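Not a UI setting, but one hedged workaround sketch: run the same constraints through tstats so that the datamodel field names themselves come back as result fields and show up in the fields sidebar. The Windows datamodel and field names below are copied from the question; whether this fits the click-to-extend workflow is an assumption:

| tstats summariesonly=t allow_old_summaries=t count from datamodel=Windows where All_WinEvents.src_user="windows_user" All_WinEvents.EventCode="5140" by All_WinEvents.src_user All_WinEvents.EventCode All_WinEvents.action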
What is the retention period on your index - you may need to extend it beyond 3 months. Alternatively, create a report to "archive" the essential information to a summary index with a longer retention period.
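A minimal sketch of the summary-index approach, scheduled as a daily report; the source index, the stats split, and the index name my_summary are placeholders, and the summary index needs its own (longer) retention configured in indexes.conf:

index=your_index earliest=-1d@d latest=@d
| stats count as daily_count by host
| collect index=my_summary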
If you have a whole Splunk infrastructure which is placed in an isolated network segment with no external connectivity - yes, you can use apps and addons. You just have to manually "insert" them into the environment (for example pass them on a USB stick). In bigger installations you typically don't download the apps directly onto your Splunk servers but distribute them using builtin Splunk mechanisms (deployment server, SH deployer, cluster master) or using your favourite configuration management tool like ansible.
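For a single standalone instance, a sketch of that manual step (the package path and app name are placeholders): copy the .tgz onto the box, then either extract it under $SPLUNK_HOME/etc/apps/ or install it from the CLI and restart:

splunk install app /tmp/my_app.tgz
splunk restart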
Hi All, I need to do some lookup table maintenance and would like to know which hosts are not being monitored but are still in the lookup table. My problem is that I have host fields that have an "*", e.g. host=saps*, that are valid and are being monitored. Here is my SPL:

| inputlookup host_lookup
| eval host=lower(host)
| join host type=left
    [| metasearch (index=os_* OR index=perfmon_*)
    | dedup host
    | eval host=lower(host)
    | eval eventTime=_time
    | convert timeformat="%Y/%m/%d %H:%M:%S" ctime(eventTime) AS LastEventTime
    | fields host eventTime LastEventTime index]
| eval Action=case(eventTime>200, "Keep Host", isnull(eventTime), "Remove from Lookup")
| fields Action host LastEventTime
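As a hedged alternative to the join (a sketch only; it still treats wildcard entries such as saps* as literal strings, which would need separate handling, e.g. a wildcard match_type lookup):

| inputlookup host_lookup
| eval host=lower(host), in_lookup=1
| append
    [| metasearch (index=os_* OR index=perfmon_*)
    | eval host=lower(host)
    | stats max(_time) as eventTime by host]
| stats max(in_lookup) as in_lookup, max(eventTime) as eventTime by host
| where in_lookup=1
| eval LastEventTime=strftime(eventTime, "%Y/%m/%d %H:%M:%S")
| eval Action=if(isnull(eventTime), "Remove from Lookup", "Keep Host")
| fields Action host LastEventTime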
Yes, thought about fillnull myself. The difference is that fillnull only fills events where there is no value at all whereas the if-based eval can just sort the verified ones from all the rest (even if you have many other possible values like "unverified", "half-verified", "maybe verified but I really don't know" and so on ;-)). So depending on the use case either of the solutions can be appropriate.
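A tiny illustration of the difference, using a hypothetical status field whose known-good value is "verified": the eval

| eval status=if(status="verified", "verified", "not verified")

buckets every other value (missing, "unverified", "half-verified", ...) into "not verified", whereas

| fillnull value="not verified" status

only fills events where status has no value at all.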
Hi @Radhika.Bhatia, Multiple tickets are getting created for the same issue from AppD. Is there any way to create the next ticket only once the current ticket is resolved for the same issue, or to wait for the next 24 hours before creating the second ticket? Thank you
Thanks for the reply. I am just curious whether, if Splunk is already installed on the isolated network, I need internet connectivity to download the add-ons/apps etc.?
I am trying to host Prometheus metrics on a Splunk app such that the metrics are available at the `.../my_app/v1/metrics` endpoint. I am able to create a handler of type PersistentServerConnectionApplication and have it return Prometheus metrics. The response, however, has status code = `500` and content = `Unexpected character while looking for value: '#'`. Prometheus metrics do not conform to any of the supported `output_modes` (atom | csv | json | json_cols | json_rows | raw | xml), so I get the same error irrespective of the output mode chosen. Is there a way to bypass the output check? Is there any other alternative to host non-conforming-format output via a Splunk REST API?
The best way to understand the choice made by chart command is to draw a chart manually.  If you cannot draw a chart with two group-by series, chart is correct. (Same with timechart.  I also wonder why you opt to use chart over _time instead of just timechart.)  If you can draw such a chart, chances are that it should either be a stats chart as @SanjayReddy suggested - stats can also use _time, just not in the same form as chart over _time; or it would be something like @gcusello suggested, i.e., "banding" two series into a single series.
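A minimal sketch of the "banding" idea, with two illustrative group-by fields host and status and a placeholder index: concatenate them into a single series field and chart over that:

index=your_index
| eval series=host.":".status
| timechart count by series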