
All Posts

Do I need to include the IP address?
I have a working query that uses transaction to find the starting/ending log event. I am trying to make some changes, but transaction is not working as I expected. In my current working example I am looking for a job name and then the starting and ending log events, using one query:

index=anIndex sourcetype=aSourcetype aJobName AND ("START of script" OR "COMPLETED OK")

This works fine when there are no issues, but if a job fails there will be multiple "START of script" events and only one "COMPLETED OK" event. So I reworked my query as follows, to get only the most recent of each of the start/completed log events:

index=anIndex sourcetype=aSourcetype aJobName AND "START of script"
| head 1
| append [ index=anIndex sourcetype=aSourcetype aJobName AND "COMPLETED OK" | head 1 ]

But when I get to the part that creates a transaction, the transaction only has the starting log event:

| rex "(?<event_name>(START of script)|(COMPLETED OK))"
| eval event_name=case(event_name="START of script", "script_start", event_name="COMPLETED OK", "script_complete")
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval {event_name}_time=_time
| rex field=_raw "Batch::(?<batchJobName>[^\s]*)"
| transaction keeporphans=true host batchJobName startswith=(event_name="script_start") endswith=(event_name="script_complete")

Is the use of | append [...] the cause? If append cannot be used with transaction, how else can I get the data I'm looking for?
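A hedged note on the likely cause: transaction expects its input in descending time order, and rows added by append are tacked on after the main results regardless of their timestamps, so the appended "COMPLETED OK" event may never appear where transaction can pair it. One workaround is to re-sort before the transaction (| sort 0 -_time). Below is a minimal sketch of an alternative that avoids both append and transaction by pairing the events with stats; the field names come from the post, and using max() for "most recent" is an assumption:

index=anIndex sourcetype=aSourcetype aJobName ("START of script" OR "COMPLETED OK")
| rex "(?<event_name>(START of script)|(COMPLETED OK))"
| rex field=_raw "Batch::(?<batchJobName>[^\s]*)"
| eval start_time=if(event_name="START of script", _time, null())
| eval complete_time=if(event_name="COMPLETED OK", _time, null())
| stats max(start_time) AS script_start_time max(complete_time) AS script_complete_time BY host batchJobName
| eval duration=script_complete_time-script_start_time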
Hi, how can we fix this issue on the ES search head? Health Check: msg="A script exited abnormally with exit status: 1" input=".$SPLUNK_HOME/etc/apps/splunk-dashboard-studio/bin/save_image_and_icon_on_install.py" stanza="default" Thanks.
@richgalloway, how can we modify this for props.conf? Thanks.
This topic is covered pretty well via the props/transforms settings, as such:

transforms.conf
[mv_extract]
REGEX = \*\*\sRABAX\:\s(?<ABAPRABAX>.*)
MV_ADD = true
REPEAT_MATCH = true

Reference: https://community.splunk.com/t5/Getting-Data-In/Multi-value-field-extraction-props-conf-transforms-conf/m-p/210426
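To cover the props.conf side of the question above: the transform is applied by referencing its stanza name from a REPORT- setting in props.conf. A minimal sketch, where your_sourcetype is a placeholder for the actual sourcetype the events arrive under:

props.conf
[your_sourcetype]
REPORT-mv_extract = mv_extract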
I'm having some trouble coming up with the SPL for the following situation: I have a series of events with a timestamp. These events have an extracted field with a value of either "YES" or "NO". When sorted by _time we end up with a list like the following:

_time    Result
time1    YES
time2    NO
time3    NO
time4    YES

I'd like to count the duration between the "NO" values and the next "YES" value, so in this case we'd have a duration equal to time4 - time2.

index=* sourcetype=*mantec* "Computer name" = raspberry_pi06 "Risk name" = WS.Reputation.1
| sort _time
| eval removed = if('Actual action' == "Quarantined", "YES", "NO")
| streamstats reset_before="("removed==\"YES\"")" last(_time) as lastTime first(_time) as firstTime count BY removed
| eval duration = round((lastTime - firstTime)/60,0)
| table removed duration count _time

I've tried to lean on streamstats, but the result resets the count at the last "NO" and doesn't count the time of the next "YES"; we end up with a duration equal to time3 - time2. Also, in the case of a single "NO" followed by a "YES" we get a duration of 0, which is also incorrect. I feel like I'm missing something extremely obvious.
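A sketch of one possible approach, offered with the caveat that it reuses the field and filter names from the post: remember the previous event's value with streamstats, stamp the start of each "NO" run, carry that stamp forward with filldown, and compute the duration only on the first "YES" that follows a "NO":

index=* sourcetype=*mantec* "Computer name" = raspberry_pi06 "Risk name" = WS.Reputation.1
| sort 0 _time
| eval removed = if('Actual action' == "Quarantined", "YES", "NO")
| streamstats current=f last(removed) as prev_removed
| eval no_start = if(removed="NO" AND (isnull(prev_removed) OR prev_removed="YES"), _time, null())
| filldown no_start
| eval duration = if(removed="YES" AND prev_removed="NO", round((_time - no_start)/60, 0), null())
| table _time removed duration

This also handles the single-"NO" case, since the duration is measured from the start of the NO run to the YES event rather than within the run itself.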
Hi Folks, I am trying to figure out how to compare a single field based on another field called timestamp. I pull data into Splunk via a JSON file that looks like the following:

{"table": "Route", "timestamp": "2023-11-07T12:25:43.208903", "dst": "10.240.0.0/30"}
{"table": "Route", "timestamp": "2023-11-07T12:25:43.208903", "dst": "10.241.0.0/30"}
{"table": "Route", "timestamp": "2023-11-07T12:25:43.208903", "dst": "10.242.0.0/30"}
{"table": "Route", "timestamp": "2023-11-10T13:12:17.529455", "dst": "10.240.0.0/30"}
{"table": "Route", "timestamp": "2023-11-10T13:12:17.529455", "dst": "10.241.0.0/31"}
{"table": "Route", "timestamp": "2023-11-10T13:12:17.529455", "dst": "10.245.0.0/30"}

There will be tens or hundreds of unique dst values, all with the same timestamp value. What I'd like to do is take all dst values for one timestamp value and compare them against the set of dst values for a different timestamp value. So far, I've been able to use appendcols plus a simple eval to compare stats values from one timestamp to another:

index=<index> host=<host> sourcetype=_json timestamp=2023-11-07T12:25:43.208903
| stats values(dst) as old_prefix
| appendcols [search index=<index> host=<host> sourcetype=_json timestamp=2023-11-10T13:12:17.529455 | stats values(dst) as new_prefix]
| eval result=if(old_prefix=new_prefix, "pass","fail")
| table old_prefix new_prefix result

And these are the results I get:

old_prefix       new_prefix       result
10.240.0.0/30    10.240.0.0/30    fail
10.241.0.0/30    10.241.0.0/31
10.242.0.0/30    10.245.0.0/30

But what I'd really want to see is something along the lines of this:

old_prefix       new_prefix       result    present_in_old_table    present_in_new_table
10.240.0.0/30    10.240.0.0/30    pass
10.241.0.0/30                     fail      10.241.0.0/30
                 10.241.0.0/31    fail                              10.241.0.0/31
10.242.0.0/30                     fail      10.242.0.0/30
                 10.245.0.0/30    fail                              10.245.0.0/30

Or this:

old_prefix       new_prefix       result    present_in_old_table    present_in_new_table
10.240.0.0/30    10.240.0.0/30    fail      10.241.0.0/30           10.241.0.0/31
10.241.0.0/30    10.241.0.0/31              10.242.0.0/30           10.245.0.0/30
10.242.0.0/30    10.245.0.0/30

Is this something that could reasonably be done inside Splunk? Please let me know if you have any further questions for me.
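One hedged way to get per-prefix rows is to search both timestamps at once, tag each event as old or new, and group by dst instead of using appendcols; the timestamp literals and index/host filters below are placeholders taken from the post:

index=<index> host=<host> sourcetype=_json (timestamp="2023-11-07T12:25:43.208903" OR timestamp="2023-11-10T13:12:17.529455")
| eval table_age=if(timestamp="2023-11-07T12:25:43.208903", "old", "new")
| stats values(table_age) as found_in by dst
| eval result=if(mvcount(found_in)=2, "pass", "fail")
| eval present_in_old_table=if(isnotnull(mvfind(found_in, "^old$")), dst, null())
| eval present_in_new_table=if(isnotnull(mvfind(found_in, "^new$")), dst, null())
| table dst result present_in_old_table present_in_new_table

Grouping by dst sidesteps the row-alignment problem that appendcols has when the two result sets differ in length.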
Hi @wkk, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma points are appreciated.
Please identify the user and domain fields in each event and I'll try to help you extract them.
Start with a search to return the events you are interested in. Since you didn't provide any details of what events you have, nor what you want in your dashboard, I am not sure how much more help can be given.
Hi, I am trying to build a dashboard and I require queries to execute the searches below:
1. REPORT FALSE POSITIVE PER TOTAL
2. REPORT MONTHLY SPLUNK ALERT HIGH - MEDIUM - LOW
Can anyone help me in building these?
I slightly changed the query, as I didn't want to use search. The query ends up with the same results.

index=your_index
| stats values(SUBMITTED_FROM) AS SUBMITTED_FROM values(STAGE) AS STAGE BY SESSION_ID
| where SUBMITTED_FROM="startPage" AND STAGE="submit"
| stats count BY SESSION_ID
@gcusello, thank you, that solved my case.
An instance name was incorrect. Check in Splunk Web -> Settings -> Monitoring Console -> Settings -> General Setup, or in /opt/splunk/etc/system/local/inputs.conf on your search head. I changed the name of the instance and ran splunk resync shcluster-replicated-config on the search head whose name I had changed. The error has disappeared for the moment; I'm currently monitoring the situation to see if the problem returns.
This looks like you have multiple events where they should be one. If so, do all the events that should be together have the same timestamp (_time)? Ideally, you should fix the ingest so that your events are broken up correctly, i.e. not just by new lines, but by new lines followed by the starting pattern of the event (a sketch of the relevant settings is below). Either way, please share some sample raw events in a code block to preserve formatting.
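For reference, a minimal props.conf sketch of that kind of event breaking; the sourcetype name is a placeholder and the regex assumes, hypothetically, that each event starts with an ISO-style date, so adjust it to the real starting pattern:

props.conf
[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}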
...
Hi @AMAN0113 I would consider not migrating the content pack but rather doing a fresh install in Splunk Cloud. Is the reason you want to migrate that you have made changes to the content pack? If so, try to identify the components needed for your solution to work, and consider migrating them with an ITSI backup in combination with a private app holding all your custom *.conf configurations. Note! This can be a bit tricky, and you will need to identify all lookups / KV stores / macros etc. that need to be migrated and have them available before restoring the backup. And of course Cloud and on-prem need to be on the same version. Do not restore a full backup to Splunk Cloud or any other environment. Full backups contain entities, services, episodes and other things that should be generated by source data. /Seb
Hi @Krutika_Agrawal Check the searches that are used in your dashboard. If you have a classic dashboard, a small magnifying glass pops up when you hover the mouse pointer over a panel. I expect that you are searching either the itsi_summary index or the summary metrics index. The KPI name should update for the affected services from the point of the change forward. To apply the new KPI name to historical data, you might want to rewrite the search to correlate the KPI name to the service's KPI name by KPI id from a REST call (a rough sketch is below). See Service KPI. /Seb
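As a rough illustration of that correlation, a sketch only: the REST endpoint path and the layout of the kpis field vary by ITSI version, so verify both against the ITSI REST API reference before relying on this:

| rest /servicesNS/nobody/SA-ITOA/itoa_interface/service
| fields title kpis
| spath input=kpis path={} output=kpi
| mvexpand kpi
| spath input=kpi path=_key output=kpiid
| spath input=kpi path=title output=kpi_name
| table title kpiid kpi_name

The kpiid column can then be matched against the KPI id field in your itsi_summary search, for example with a lookup or a join.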
Hi, I need some help creating a table from the JSON events below. Can someone please help me with that? The table columns should be 'Name' and 'Count'; Name should hold "cruice", "crpice" etc. and Count should have the corresponding values. Any help would be appreciated. Thanks.

11/7/23 9:04:23.616 PM   "Year": {
host = iapp6373.howard.ms.com   source = /tmp/usage_snapshot.json   sourcetype = tsproid_prod.db2ts_log_generator:app

11/7/23 9:04:23.616 PM   "Top30RequesterInOneYear": {
host = iapp6373.howard.ms.com   source = /tmp/usage_snapshot.json   sourcetype = tsproid_prod.db2ts_log_generator:app

11/7/23 9:04:23.616 PM   "cruice": 2289449,
host = iapp6373.howard.ms.com   source = /tmp/usage_snapshot.json   sourcetype = tsproid_prod.db2ts_log_generator:app

11/7/23 9:04:23.616 PM   "crpice": 1465846,
host = iapp6373.howard.ms.com   source = /tmp/usage_snapshot.json   sourcetype = tsproid_prod.db2ts_log_generator:app

11/7/23 9:04:23.616 PM   "zathena": 1017289,
host = iapp6373.howard.ms.com   source = /tmp/usage_snapshot.json   sourcetype = tsproid_prod.db2ts_log_generator:app

11/7/23 9:04:23.616 PM   "qrecon": 864252,
host = iapp6373.howard.ms.com   source = /tmp/usage_snapshot.json   sourcetype = tsproid_prod.db2ts_log_generator:app
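Since each name/count pair arrives as its own event, one hedged option is to pull the fields out of _raw with rex and drop the non-matching events (the "Year" and "Top30RequesterInOneYear" header lines never match the numeric part); the index name is a placeholder:

index=your_index sourcetype="tsproid_prod.db2ts_log_generator:app"
| rex field=_raw "\"(?<Name>[^\"]+)\":\s*(?<Count>\d+)"
| where isnotnull(Count)
| table Name Count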
Hi Rick, I am trying to make a dropdown for the status, to filter for "still open", "closed" etc.

index=nessus Risk=Critical
| eval state=if(_time<now()-7*86400,"OLD","NEW")
| eval status=case(state="OLD" AND state="NEW","still open",state="OLD","closed",state="NEW","Yummy, a fresh one!")

and added

| stats count by status

I get the information for the 3 status outputs in the normal search. I added this to the dropdown and can choose between the 3, but when I choose one of them I get no results found back. Everything works with *. I tried static options and it's the same problem; I can't filter for the 3 status outputs. Do you know what I am doing wrong? Thanks
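A hedged guess at what is going wrong, with a sketch: a dropdown token is typically inserted into the base search, where it is evaluated before the eval that creates status has run, so the field does not exist yet and nothing matches. Applying the token after the field is created is one fix; $status_tok$ is a hypothetical token name to replace with your own, and the dropdown's "All" choice would keep the static value * so the wildcard case still works:

index=nessus Risk=Critical
| eval state=if(_time<now()-7*86400,"OLD","NEW")
| eval status=case(state="OLD" AND state="NEW","still open",state="OLD","closed",state="NEW","Yummy, a fresh one!")
| search status="$status_tok$"
| stats count by status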