All Posts

Super old topic, but it's shocking that Splunk still doesn't seem to have brought this functionality into the product. Would you be open to sharing the modifications you made to incident_review.js? Thank you.
Thanks @phanTom. If anyone else comes across this in a search: I created a decision block that checks for container tags:
If "tag1": go to the End block
If "tag2": continue to the next block
The next block applies the tag "tag1" to the container, and the final block removes the tag "tag2" from the container. This design, for better or worse, allows me to run the playbook "on demand" via a Workbook or manual action on the case management side, while keeping the automatic capability (apply the label "tag2" to the container and run) if I decide to use it as, say, a child playbook.
Thanks! This looks to be returning the desired info and format, though I noticed some Policies were missing counts for certain results. The number of distinct values shown for 'displayName' is lower than what is actually present in the event log. I think this may be an issue with Splunk itself and not the query, though. Would you happen to know whether the number of values returned can hit a max or limit in Splunk?
Frustration is the last thing I wanted to cause here. Apologies! I still believe it was not me confusing people; I just wanted help with simply comparing two datasets and printing out only the hosts seen in one of the data sources. If that sounds confusing, then again, apologies. I have been a member for some time now and have always admired all the help given here. I have tested all the solutions provided so far and seen no results (it might be my fault). The only solution that provided the results I want is the following, for all to use. I agree that set might not be as efficient as other commands (I do not have great knowledge of the set command as of now), but here is what worked for me:
| set diff [| tstats count where source_1 by host | table host] [| tstats count where source_2 by host | table host]
That SPL provides a list of all of the hosts not seen in source_2. At the end of the day it is important for people to get some working examples; they test them and they either work or they don't. Silence is golden from time to time, and no one wants waves of frustration. I did not mean to do any harm. I've been in the industry long enough to realize we are all different and have different emotions. Please, by all means, if you can create something that proves me wrong, or anything better than the set command, share it for the community to use. Thank you!
Hi @aab1, The error message shows an installation problem; Python is complaining about a missing module. Please check your installation document. At this stage, it does not seem related to a certificate or firewall issue.
I am using the Sideview app to try to monitor usage by users. There is a Pain field in the User Activity report. Does anyone know what this Pain field is trying to show?
| tstats count where index=<your-index-here> earliest=-3d@d latest=now() by _time span=15m
| eval date_wday=strftime(_time,"%A"), date_hourmin=strftime(_time,"%H:%M")
| search date_wday!=Saturday date_wday!=Sunday
| eval current_weekday=strftime(now(),"%A")
| eval previous_working_day=case(match(current_weekday,"Monday"),"Friday",match(current_weekday,"Tuesday"),"Monday",match(current_weekday,"Wednesday"),"Tuesday",match(current_weekday,"Thursday"),"Wednesday",match(current_weekday,"Friday"),"Thursday")
| table _time count date_wday date_hourmin current_weekday previous_working_day
| where date_wday=current_weekday OR date_wday=previous_working_day
| chart sum(count) as count by date_hourmin, date_wday
OK, that is the closest I could get to what you originally tried. However, there are some flaws with this solution you may want to consider. Specifically, partial time bins cannot be filtered out without using the timechart command, so it could look like the count for the most recent time span has dangerously dropped when in reality you only have 2 or 3 minutes of the 15-minute window to measure. Working with the timewrap command is a more correct way to do this, as you can leverage timechart, which allows you to disable partial windows. You will find, though, that filtering out weekends and the -3d@d makes for odd visualizations.
index=<your-index-here> date_wday!=saturday date_wday!=sunday earliest=-3d@d latest=+1d@d
| timechart span=15m partial=f count
| timewrap 1day align=end
Splunk already extracts the date_* fields for you. The +1d@d is only important if you want your graph to go midnight to midnight; replace it with now() if you are OK with the visualization start and end moving as the day progresses.
Hi @Derson, I wouldn't call this behavior a bug. The eval command works on both strings and numbers: if the values are strings it concatenates them, and if the secondUID values are numbers it performs a calculation instead. I think some of your secondUID values are being processed as numbers, which is why the lookup cannot match. Your solution is wise and ensures that the values are processed as strings.
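To illustrate the idea, a minimal sketch of forcing string handling before the lookup (the lookup name and output field are placeholders, not from the original thread):
| eval secondUID=tostring(secondUID)
| lookup mylookup secondUID OUTPUT display_name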
Hi @pranay03, By default the filelog receiver doesn't read logs from a file that is not actively being written to, because start_at defaults to end. This setting is ignored if previously read file offsets are retrieved from a persistence mechanism, so this behavior is not a problem during normal running; only on the first installation will otel start reading new data only. If you want to read old files too, you can configure the start_at parameter as beginning. Here is the related document: https://docs.splunk.com/observability/en/gdi/opentelemetry/components/filelog-receiver.html#settings
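As an illustrative sketch (not part of the original reply), a filelog receiver configured to read from the start of files; the include path is a placeholder and only the relevant keys are shown:
receivers:
  filelog:
    include:
      - /var/log/myapp/*.log
    start_at: beginning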
Use the strptime and strftime functions to convert time formats:
| eval timeField=strftime(strptime(timeField,"%H:%M:%S.%6Q"), "%H:%M:%S")
You can also use string manipulation to cut off the last 7 characters.
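For example, a sketch of the substring approach, assuming timeField looks like 12:34:56.789012 so the last 7 characters are the dot plus six subsecond digits:
| eval timeField=substr(timeField,1,len(timeField)-7)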
What is it you expect the Deployment Server to do? A DS has no use for props.conf, transforms.conf, or inputs.conf.  It uses outputs.conf to send its logs to the indexer(s).
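For context, a minimal outputs.conf sketch of the kind a deployment server (or any Splunk instance) would use to forward its own logs; the group name, host, and port below are placeholders:
[tcpout]
defaultGroup = primary_indexers
[tcpout:primary_indexers]
server = idx1.example.com:9997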
I was facing the same issue. I used the following condition and it is working fine:
search result_of_search > 10
I ended up just adding the hex code of my preferred colors to the options section of the visualization:
"viz_AbCd12if": {
    "type": "splunk.table",
    "dataSources": {
        "primary": "ds_xNY7uyLU"
    },
    "title": "Title of Table",
    "options": {
        "columnFormat": {
            "sparkline": {
                "data": "> table | seriesByName(\"sparkline\") | formatByType(sparklineColumnFormatEditorConfig)",
                "sparklineColors": [
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9",
                    "#66aaf9"
Hi, on the UF, under etc/apps/remo/local, I placed the inputs, outputs, props, and transforms configuration files, and when I search the data on the indexer + search head servers the events are received successfully.
inputs.conf:
[monitor://E:\KS Application GBR (GR)\sbxLogs\]
index = ks_dev
sourcetype = ks_logs
crcSalt = <SOURCE>
outputs.conf:
[tcpout:bprserver]
server = 1.2.3.4:9997
useACK = true
props.conf:
[ks_logs]
TRANSFORMS--null = EXCLUDE_INFO_WARN_events
transforms.conf:
[EXCLUDE_INFO_WARN_events]
REGEX = ^[\d|-]*\s[\d|:|,]*\s(INFO|WARN).*$
DEST_KEY = queue
FORMAT = nullQueue
I updated the same configuration on the deployment server under etc\deploymentapps\ksapp\local:
[monitor://E:\KS Application GBR (GR)\sbxLogs\]
index = ks_dev
sourcetype = ks_logs
crcSalt = <SOURCE>
[tcpout:bprserver]
server = 1.2.3.4:9997
useACK = true
[ks_logs]
TRANSFORMS--null = EXCLUDE_INFO_WARN_events
[EXCLUDE_INFO_WARN_events]
REGEX = ^[\d|-]*\s[\d|:|,]*\s(INFO|WARN).*$
DEST_KEY = queue
FORMAT = nullQueue
Events are being received on the SH + indexer servers. Note: in my account there is no heavy forwarder instance. Please help with how to do the configuration on the deployment server.
Hi, can someone please let me know how to convert the time from the format hh:mm:ss.6Q to hh:mm:ss?
Hi @nnkreddy, if you're confident that you received an event in the last 24 hours, you could run something like this:
index = index1 earliest=-24h latest=now source IN (dev-*api.log) ("testapi" AND "HEARTBEAT")
| stats latest(_time) AS latest BY APIName, JVM
| where latest>now()-300
If you're not sure that you received at least one event in the last 24 hours, you have to create a lookup (called e.g. perimeter.csv) containing all the APIName and JVM values to monitor, then you can run something like this:
index = index1 earliest=-5m latest=now source IN (dev-*api.log) ("testapi" AND "HEARTBEAT")
| stats count BY APIName, JVM
| append [ | inputlookup perimeter.csv | eval count=0 | fields APIName JVM count ]
| stats sum(count) AS total BY APIName, JVM
| where total=0
The second search is lighter and faster to execute and gives more control, but it requires you to manage the lookup. Ciao. Giuseppe
Splunk will check the first 256 (configurable) bytes of a monitored file to see if the entire file has been changed rather than new lines added to the end.  If it sees the beginning of the file is different then it assumes the entire file is new and re-ingests it. I see no workaround for this.  Splunk has no way to know how many older lines were trimmed and so has to treat the whole file as new.
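For reference, the number of bytes used for that check is controlled by the initCrcLength setting in inputs.conf; a minimal sketch, with a placeholder monitor path:
[monitor:///var/log/myapp/app.log]
initCrcLength = 1024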
Hello, I have a simple requirement, but I'm new to Splunk, so I'm facing some challenges and hoping for some luck! My application writes HEARTBEAT messages every 2 minutes to log files across multiple sources. I'm just trying to create an alert and send an email if heartbeat messages aren't written in the last 5 minutes. It may look simple, but I also need to know which sources don't have heartbeat messages. I've tried the query below, which works but sometimes gives me incorrect results, so I'm looking for a better and simpler solution.
index = index1 earliest=-5m latest=now source IN (dev-*api.log) ("testapi" AND "HEARTBEAT")
| fields source
| append [ search index = index1 earliest=-2w@w0 latest=now source IN (dev-*api.log) ("testapi" AND "HEARTBEAT") | stats dc(source) as source_list by source | fields source ]
| rex field=_raw "HEARTBEAT for (?<APIName>.*).jar (?<Version>.*)"
| stats count as #heartbeats, latest(Version) as Versions by APIName, JVM
| eval Status=case(('#heartbeats' <= 1 OR isnull('#heartbeats')), "NOT RUNNING", '#heartbeats' > 1, "RUNNING")
| table APIName, Versions, Status
Appreciate the help! Thanks.
Any app containing inputs.conf should have the "Restart splunkd" option enabled.  Do that in the Forwarder Management section of the Deployment Server.  That will tell the UF to restart itself each time it gets an updated copy of the app.
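For anyone managing this outside the UI, the same behavior can also be set in serverclass.conf on the deployment server; a sketch with placeholder server class and app names:
[serverClass:my_class:app:my_app]
restartSplunkd = true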
Hello there, I was having the same issue, and it turned out to be a problem with the installation. So I just did a yum remove splunk* and removed the /opt/splunkforwarder home directory completely. After uninstalling and removing the Splunk home directory, Splunk started just fine and I was able to run the add monitor command without any issues. I'm running RHEL 8.x and issuing all of these commands via the Linux CLI. Splunk version is 9.1.2. I hope this helps. Respectfully, Guillermo, Washington, DC