All Posts

This solved question seems to be what you're looking for: Solved: How to get the Audit for Lookup files modification... - Splunk Community. If you don't want any changes at all, and you're on a *nix system, could you deploy your lookup with read-only permissions on the file within the app?
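As a self-contained illustration of the read-only idea (in a real deployment the path would be $SPLUNK_HOME/etc/apps/&lt;app&gt;/lookups/&lt;name&gt;.csv; here a temp file stands in):

```shell
# Create a stand-in lookup file, then strip write permission.
# With mode 444, outputlookup (running as a non-root splunk user)
# can no longer overwrite the file.
lookup=$(mktemp /tmp/my_lookup.XXXXXX)
printf 'host,team\nweb01,abc\n' > "$lookup"
chmod 444 "$lookup"
stat -c '%a' "$lookup"   # prints 444
```

Note that this only blocks writes while Splunk runs as an unprivileged user; root can still modify the file.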
The collect command does allow you to define a sourcetype. Note that the stash sourcetype is special in that it doesn't count against your license volume. When you use collect with a different sourcetype, Splunk considers it "new" data, since you may not just be generating summary statistics on data that is already indexed. Also, since this is tied to an alert, would using the Log Event alert action be sufficient?
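To make that concrete, here is a minimal sketch of collect with an explicit sourcetype (the index, sourcetype names, and the idea of preserving the original sourcetype in a field are assumptions):

```spl
index=abc sourcetype=my:original
| eval orig_sourcetype = sourcetype
| collect index=newidx sourcetype=my:summary
```

Just keep in mind that because my:summary is not stash, the collected copy counts against your license a second time.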
@gcusello I found a similar solution of yours, but I am unable to adapt it to my use case. Can you please help me tweak the search for mine? https://community.splunk.com/t5/Alerting/How-do-you-detect-when-a-host-stops-sending-logs-to-Splunk/m-p/369071
It sounds like you will have to build an SPL query using the eventstats command, or possibly the streamstats command. Since I can't see your data, I'm not sure which would be the best approach, but there is a slight difference between these two commands.

Eventstats is like the stats command in that it looks at all of the events matched by your query, but it does not transform the stream; it just adds additional fields to every event. For example, you could count your up and bad events using eventstats by host. Then each event for that host would carry the total counts. So if there were six up events and seven bad events for a host, each of those 13 events would have an up value of six and a bad value of seven.

Streamstats, on the other hand, only looks at events in the stream up to and including the current point; it doesn't know about "future" events in the result set. This is good for things like running averages, but it has other uses too. So in your case, the first up event would have a count of 1, the second up event a count of 2, the first bad event a count of 1, and so on; the last up event would have a count of six and the last bad event a count of seven.

I know you mentioned duration; you can also add up the time differences using these commands by doing math on _time.
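A minimal sketch of the duration math with streamstats (the index, sourcetype, and status field names are assumptions about your data): attribute the gap between consecutive events to the state the host was in during that gap, then sum per host.

```spl
index=myindex sourcetype=host_health status IN ("ok", "bad")
| sort 0 host _time
| streamstats current=f window=1 last(_time) as prev_time last(status) as prev_status by host
| eval duration = _time - prev_time
| stats sum(eval(if(prev_status="ok", duration, 0))) as total_ok_duration
        sum(eval(if(prev_status="bad", duration, 0))) as total_bad_duration
        by host
| eval percentage_ok_duration = round(100 * total_ok_duration / (total_ok_duration + total_bad_duration), 2)
```

The `current=f window=1` options make streamstats carry forward only the immediately preceding event's _time and status for each host.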
Based on your description, it sounds like you are looking to use the drilldown actions of a visualization to change something on the existing page. While not exactly what you're doing, here are some posts from around here:

Solved: How to create a drill down from one panel to anoth... - Splunk Community
Solved: Single value drilldown click to display and click ... - Splunk Community

Also a couple of external resources discussing how the tokens work:

The Beginner’s Guide to Splunk Drilldowns With Conditions – Kinney Group
Define Your Drilldown in Splunk: $click.value$ vs $click.value2$ – Kinney Group
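For the classic Simple XML case, the usual pattern those posts describe is to set a token on click and have a second panel depend on it. A minimal sketch (searches and token name are placeholders):

```xml
<row>
  <panel>
    <table>
      <search><query>index=_internal | stats count by host</query></search>
      <drilldown>
        <!-- $click.value$ holds the first-column value of the clicked row -->
        <set token="selected_host">$click.value$</set>
      </drilldown>
    </table>
  </panel>
  <panel depends="$selected_host$">
    <table>
      <search><query>index=_internal host=$selected_host$ | stats count by sourcetype</query></search>
    </table>
  </panel>
</row>
```

The second panel stays hidden until a row is clicked, because its depends attribute requires the token to be set.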
Hi Splunkers,

I'm trying to send alert data from one index to another using a macro. For example, the macro has 4 arguments like below, and I would like to send data to a new index called "newidx" using the collect command. Here is the macro, called `newmacro`:

eval apple=xyz, banana=abc, mango=www, grape=123 | collect index=newidx

The idea is that wherever I reference this macro in an alert, that exact alert's raw data needs to be copied to newidx. But the sourcetype always changes to stash instead of the original, and I don't see all the original fields in the summary index. Is there any way to define a sourcetype, something like:

| collect index=newidx sourcetype=$sourcetype$
Dear All,

I am unable to send data from a universal forwarder to Splunk Enterprise. I have minimal knowledge of Splunk; I'm trying to configure the universal forwarder but without success. Could you please help me with this?

Please find my configurations below. I am using Splunk Enterprise 9.0 and universal forwarder version 9.1.1 on CentOS 7.0.

inputs.conf:

[monitor:///var/log/messages]
index=os
disabled=0

outputs.conf:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
[server://192.168.122.1:9997]

I used the following command to check the port status:

netstat -an | grep 9997
tcp 0 0 0.0.0.0:9997 0.0.0.0:* LISTEN

localhost.localdomain --- my Splunk Enterprise instance
127.0.0.1 --- my Splunk universal forwarder

I want to know where I am making a mistake. I would appreciate your kind support. Thanks in advance.
Hello,

I have a lookup where all the hostnames are available under a field called "title", organized by team. I would like to set up an alert for team "abc" if any of its hosts stops reporting for more than 15 minutes. I tried the search below but am unable to get results. Can anyone please help me with the search? It would be very helpful.

The search I am using:

| inputlookup 123.csv
| search team="abc"
| table title
| rename title as host
| appendpipe [ | stats count as islookupcount ]
| eval current_time = now()
| eval islookupcount = coalesce(islookupcount, 0)
| search islookupcount = 0
| eventstats latest(_time) as last_event_time by host
| where current_time - last_event_time > 900
| eval stopped_sending_time=strftime(current_time,"%Y-%m-%d %H:%M:%S")
| table unit_id, host, stopped_sending_time

Please help me with a better search for my use case; maybe I am not using the right one. Thanks.
Embed in the dashboard
I am trying to create a dashboard panel whose dropdowns differ depending on the row you select. I am using one of the searches that comes with the monitoring application as my search:

index=_internal sourcetype=splunkd TERM(group=tcpin_connections) TERM("cooked") OR TERM("cookedSSL") (hostname!=*.splunk*.*)
| dedup hostname
| stats c as fwdCount by version
| rex field=version "^(?<fwdV>\d+.\d+)"
| eval splV= [ | makeresults | eval VERSION=7.0 | append [ | rest splunk_server=local count=1 /services/server/info | stats max(version) as VERSION] | rex field=VERSION "^(?<version>\d+.\d+)" | stats max(version) as splV | return $$splV ]
| eval fwd_7_3_eos=relative_time(strptime("22-Oct-2021", "%d-%b-%Y"), "+1d@d"),
    fwd_8_0_eos=relative_time(strptime("22-Oct-2021", "%d-%b-%Y"), "+1d@d"),
    fwd_8_1_eos=relative_time(strptime("19-Apr-2023", "%d-%b-%Y"), "+1d@d"),
    fwd_8_2_eos=relative_time(strptime("30-Sep-2023", "%d-%b-%Y"), "+1d@d"),
    fwd_9_0_eos=relative_time(strptime("14-Jun-2024", "%d-%b-%Y"), "+1d@d"),
    fwd_9_1_eos=relative_time(strptime("28-Jun-2025", "%d-%b-%Y"), "+1d@d"),
    fwd_default_eos=relative_time(strptime("01-Jan-1971", "%d-%b-%Y"), "+1d@d")
| eval expTimestamp = case(
    match($$fwd_version$$, "^7\.3"), fwd_7_3_eos,
    match($$fwd_version$$, "^8\.0"), fwd_8_0_eos,
    match($$fwd_version$$, "^8\.1"), fwd_8_1_eos,
    match($$fwd_version$$, "^8\.2"), fwd_8_2_eos,
    match($$fwd_version$$, "^9\.0"), fwd_9_0_eos,
    match($$fwd_version$$, "^9\.1"), fwd_9_1_eos,
    1==1, fwd_default_eos)
| fields - fwd_*_eos
| eval warn=case( (now() > expTimestamp), fwdCount, 1==1, 0)
| eval info=fwdCount-warn
| rename warn as "Out of date", info as "Up to date"
| fields - fwdV, splV, fwdCount, expTimestamp

What I want is for the dropdown to change based on the row I select (see attached snapshot).
@PickleRick I have lots of microservices that work together. When a user searches on my product, the log shows something like this, indicating the flow of which modules processed the user's request, e.g.: front-end > search > db > report
Haven't been able to find this, but I want to calculate an uptime percentage for a host based on 2 unique events. One gets logged when something is bad, the other gets logged when everything is fine.

An example would be a host logging 10 minutes of "ok" events, then 4 minutes of "bad" events, then 18 minutes of "ok" events, etc. I need to output the following based on the search range of the query:

Host | total_ok_duration | total_bad_duration | percentage_ok_duration

This needs to run and return results for multiple hosts as well.
That's a tough problem. It's not a Splunk-tough problem but a generally tough problem. In order to find the matches, you need to do the comparisons, and that's the biggest problem here.

Since you don't have a fixed field you want to look up, but instead want to use the lookup as a list of patterns to match against your whole raw event (at least that's how I interpret your requirement), you have to do m*n "searches" against your data, where m is the number of events and n is the number of distinct values in your lookup.

If you know you can split the events into separate words, that might make it a bit easier, because you don't have to match your raw event against terms from the lookup but can instead do a lookup with the words from the event (which could be marginally faster, since it's more probable that you'll match something before reaching the end of the lookup).

There are several possible approaches here, but I'm not sure which one would be fastest given the size of your data. The more events you have to match, the more tempting it is to build something that matches cleverly over a sorted list of your terms from the lookup. (To make things a bit more complicated, remember that each "comparison" is also not an atomic operation; its cost depends on the length of the strings and the match ratio.)
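As a language-agnostic illustration of the word-splitting idea (a pure-Python sketch, not Splunk-specific; the term list and events are made up):

```python
# Build a set of lookup terms once, then test each event's words against it.
# Set membership is O(1) on average, so the total cost is roughly
# O(total words) instead of O(events * lookup_size).
lookup_terms = {"vatsal", "jagani", "10.0.0.1", "10.0.0.2"}

events = [
    "hello, I'm Vatsal.",
    "hello, I'm jagani too.",
    "nothing to see here",
]

def matched_terms(event: str, terms: set) -> set:
    # Normalize to lowercase and strip punctuation so "Vatsal." matches "vatsal".
    words = {w.strip(".,!?\"'") for w in event.lower().split()}
    return words & terms

hits = [matched_terms(e, lookup_terms) for e in events]
# The first two events each match a lookup term; the third matches nothing.
```

The same shape works inside a scripted input or custom search command, but the win comes from the set lookup, not from where the code runs.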
Hello,

I am trying to change the color of the sparkline in my table. I don't have the option to change it using the dropdown menu under "Column-specific formatting", so I'm trying to do it through the Code option. Here is the current code I have:

{
  "type": "splunk.table",
  "dataSources": { "primary": "ds_xNY7uyLU" },
  "title": "Top Notable Sources",
  "options": {
    "fontSize": "extraSmall",
    "columnFormat": {
      "sparkline": {
        "data": "> table | seriesByName(\"sparkline\") | formatByType(sparklineColumnFormatEditorConfig)",
        "sparklineColors": "> table | seriesByName(\"sparkline\") | matchValue(sparklineColorsEditorConfig)"
      }
    }
  },
  "context": {
    "sparklineColorsEditorConfig": [
      { "string": {}, "value": "#66aaf9" }
    ]
  },
  "showProgressBar": false,
  "showLastUpdated": false
}
What do you mean by "patterns"? The answer will greatly depend on how you define them. Depending on your needs, you could just flatten the module list into a string and summarize which string occurs most often, or you could try other techniques, up to and including the MLTK app.
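A sketch of the flatten-and-count idea (the index and the txn_id/module field names are assumptions about how the modules were extracted):

```spl
index=abc
| stats list(module) as modules by txn_id
| eval flow=mvjoin(modules, " > ")
| stats count by flow
| sort - count
```

The top rows are the most common flows; `| sort + count` (or the rare command) surfaces the rare ones instead.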
This is something you typically do in the search-head layer. It has nothing to do with HEC. And you're mixing different things here: EVAL-* entries belong directly in props.conf, not in a transforms.conf stanza. And again, if you have anything bigger than an all-in-one setup, this goes into the search-head tier.
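For reference, a minimal props.conf sketch (the sourcetype name and the eval expression are placeholders):

```
[my:sourcetype]
EVAL-environment = if(match(host, "^prod"), "production", "non-production")
```

Deployed to the search-head tier, this defines a calculated field that is evaluated at search time.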
Hello Splunkers,

I'm looking for the best algorithm to search for events with the criteria below.

I have a lookup with only one field, but it is multi-valued, about 10,000 lines. For example:

"vatsal, jagani"
"10.0.0.1, 10.0.0.2"

I want to search index=abc for the last 2 hours (about 50 events) to see if there are at least two events (but there can be more) that contain words from one set.

For example:
event-1 - "hello, I'm Vatsal."
event-2 - "hello, I'm jagani too."
Here, two events have matching words from the same lookup field.

Another example:
event-3 - "hi, vatsal"
event-4 - "hello, vatsal"
This also counts as a match.

And I want to run this alert every hour.

Solution 1 - I could use the map command as below, but I don't think that's very efficient:

| inputlookup words_lookup.py
| eval or_field = <convert words to an OR list like "vatsal" OR "jagani">
| map max_count=1000000 "search index=abc $or_field$"

Solution 2 - I could write a Python script, but I'm not sure what algorithm to use.

I'm looking for a more efficient query or Python algorithm to do this.
@PickleRick You're right. After several workarounds I finally figured out how to extract the list of modules, and solved all the challenges.

Now I have lists of modules like this (grouped by id):

Txn1
16:30:53:002 moduleA
16:30:54:002 moduleA
16:30:55:002 moduleB
16:30:56:002 moduleC
16:30:57:002 moduleD
16:30:58:002 moduleE
16:30:59:002 moduleF
16:30:60:002 moduleZ

Txn2
16:30:54:002 moduleD
16:30:55:002 moduleE
16:30:56:002 moduleY

How can I use Splunk to find patterns (flows) of modules? That is, find the most common patterns and the rare ones?
I have a lookup table in Splunk. I want to check whether it has ever been updated using the outputlookup command.