All Posts

I'm having some trouble coming up with the SPL for the following situation: I have a series of events with a timestamp. These events have an extracted field with a value of either "YES" or "NO". When sorted by _time we end up with a list like the following:

_time | Result
time1 | YES
time2 | NO
time3 | NO
time4 | YES

I'd like to count the duration between the first "NO" value and the next "YES" value. So in this case we'd have a duration equal to time4 - time2.

index=* sourcetype=*mantec* "Computer name" = raspberry_pi06 "Risk name" = WS.Reputation.1
| sort _time
| eval removed = if('Actual action' == "Quarantined", "YES", "NO")
| streamstats reset_before="("removed==\"YES\"")" last(_time) as lastTime first(_time) as firstTime count BY removed
| eval duration = round((lastTime - firstTime)/60,0)
| table removed duration count _time

I've tried to lean on streamstats, but the result resets the count at the last "NO" and doesn't include the time of the next "YES", so we end up with a duration equal to time3 - time2. Also, in the case of a single "NO" followed by a "YES" we get a duration of 0, which is also incorrect. I feel like I'm missing something extremely obvious.
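A sketch of one possible fix (an assumption, not a tested answer from the thread): reset_before resets the streamstats window on the "YES" event itself, so the "YES" never joins its preceding "NO" group. Using reset_after instead keeps the "YES" inside the group, so its _time can be compared against the first "NO"; dropping BY removed lets the NOs and the closing YES share one window:

```
index=* sourcetype=*mantec* "Computer name"=raspberry_pi06 "Risk name"=WS.Reputation.1
| sort 0 _time
| eval removed = if('Actual action' == "Quarantined", "YES", "NO")
| streamstats reset_after="("removed==\"YES\"")" min(_time) as firstTime count
| where removed="YES" AND count > 1
| eval duration = round((_time - firstTime)/60, 0)
| table _time duration count
```

The count > 1 filter skips a "YES" with no preceding "NO", which would otherwise report a duration of 0.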
Hi Folks, I am trying to figure out how to compare a single field based on another field called timestamp. I pull data into Splunk via a JSON file that looks like the following:

{"table": "Route", "timestamp": "2023-11-07T12:25:43.208903", "dst": "10.240.0.0/30"}
{"table": "Route", "timestamp": "2023-11-07T12:25:43.208903", "dst": "10.241.0.0/30"}
{"table": "Route", "timestamp": "2023-11-07T12:25:43.208903", "dst": "10.242.0.0/30"}
{"table": "Route", "timestamp": "2023-11-10T13:12:17.529455", "dst": "10.240.0.0/30"}
{"table": "Route", "timestamp": "2023-11-10T13:12:17.529455", "dst": "10.241.0.0/31"}
{"table": "Route", "timestamp": "2023-11-10T13:12:17.529455", "dst": "10.245.0.0/30"}

There will be tens or hundreds of unique dst values, all with the same timestamp value. What I'd like to be able to do is compare all dst values for one timestamp value against the set of dst values for a different timestamp value. So far, I've been able to use appendcols plus a simple eval to compare stats values from one timestamp to another:

index=<index> host=<host> sourcetype=_json timestamp=2023-11-07T12:25:43.208903
| stats values(dst) as old_prefix
| appendcols [search index=<index> host=<host> sourcetype=_json timestamp=2023-11-10T13:12:17.529455 | stats values(dst) as new_prefix]
| eval result=if(old_prefix=new_prefix, "pass","fail")
| table old_prefix new_prefix result

And these are the results I get:

old_prefix | new_prefix | result
10.240.0.0/30 10.241.0.0/30 10.242.0.0/30 | 10.240.0.0/30 10.241.0.0/31 10.245.0.0/30 | fail

But what I'd really want to see is something along the lines of this:

old_prefix | new_prefix | result | present_in_old_table | present_in_new_table
10.240.0.0/30 | 10.240.0.0/30 | pass | |
10.241.0.0/30 | | fail | 10.241.0.0/30 |
 | 10.241.0.0/31 | fail | | 10.241.0.0/31
10.242.0.0/30 | | fail | 10.242.0.0/30 |
 | 10.245.0.0/30 | fail | | 10.245.0.0/30

Or this:

old_prefix | new_prefix | result | present_in_old_table | present_in_new_table
10.240.0.0/30 10.241.0.0/30 10.242.0.0/30 | 10.240.0.0/30 10.241.0.0/31 10.245.0.0/30 | fail | 10.241.0.0/30 10.242.0.0/30 | 10.241.0.0/31 10.245.0.0/30

Is this something that could reasonably be done inside Splunk? Please let me know if you have any further questions for me.
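A sketch of one way to get a per-prefix comparison (an alternative to appendcols, not from the thread; index and host are the same placeholders as above): search both timestamps at once and group by dst, so each prefix row records which snapshot(s) it appeared in:

```
index=<index> host=<host> sourcetype=_json (timestamp="2023-11-07T12:25:43.208903" OR timestamp="2023-11-10T13:12:17.529455")
| eval snapshot=if(timestamp="2023-11-07T12:25:43.208903", "old", "new")
| stats values(snapshot) as present_in by dst
| eval result=if(mvcount(present_in)==2, "pass", "fail")
| table dst present_in result
```

A prefix present in both snapshots gets present_in = old new and result = pass; a prefix present in only one gets a single value and result = fail.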
Hi @wkk , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
Please identify the user and domain fields in each event and I'll try to help you extract them.
Start with a search to return the events you are interested in. Since you didn't provide any details of what events you have, nor what you want in your dashboard, I am not sure how much more help can be given.
Hi, I am trying to build a dashboard and I need queries for the searches below: 1. REPORT FALSE POSITIVE PER TOTAL 2. REPORT MONTHLY SPLUNK ALERT HIGH - MEDIUM - LOW Can anyone help me build these?
I slightly changed the query, as I didn't want to use search. The query ends up with the same results.

index=your_index
| stats values(SUBMITTED_FROM) AS SUBMITTED_FROM values(STAGE) AS STAGE BY SESSION_ID
| where SUBMITTED_FROM="startPage" AND STAGE="submit"
| stats count BY SESSION_ID
@gcusello thank you that solved my case 
An instance name was incorrect. Check in Splunk Web -> Settings -> Console Monitoring -> Settings -> General Setup, or in /opt/splunk/etc/system/local/inputs.conf on your search head. I changed the name of the instance and ran splunk resync shcluster-replicated-config on the search head whose name I had changed. The error has disappeared for the moment; I'm monitoring the situation to see if the problem returns.
This looks like you have multiple events where there should be one. If so, do all the events that should be together have the same timestamp (_time)? Ideally, you should fix the ingest so that your events are broken up correctly, i.e. not just by new lines, but by new lines followed by the starting pattern of the event. Either way, please share some sample raw events in a code block to preserve formatting.
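That line-breaking idea can be sketched in props.conf (the sourcetype name and the start-of-event pattern below are placeholders until the raw events are known):

```
[your:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=<start-of-event pattern>)
```

The lookahead keeps the event-start pattern in the event; only the newline run in the capture group is consumed as the break.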
Hi @AMAN0113  I would consider not migrating the content pack but rather doing a fresh install in Splunk Cloud. Is the reason you want to migrate that you have made changes to the content pack? If so, try to identify the components needed for your solution to work, and consider migrating them with an ITSI backup in combination with a private app holding all your custom *.conf configurations. Note! This can be a bit tricky: you will need to identify all lookups / KV stores / macros etc. that need to be migrated and have them available before restoring the backup. And of course Cloud and on-prem need to be on the same version.  Do not restore a full backup to Splunk Cloud or any other environment. Full backups contain entities, services, episodes and other things that should be generated from source data.   /Seb
Hi @Krutika_Agrawal Check the searches that are used in your dashboard. If you have a classic dashboard, a small magnifying glass pops up when you hover the mouse pointer over a panel.  I expect that you are searching either the itsi_summary index or the summary metrics index. The KPI name should update for the affected services from the point of the change forward.  To apply a new KPI name to historical data you might want to rewrite the search to correlate the KPI name to the service's KPI name by KPI id from a REST call. See Service KPI. /Seb
Hi, I need some help creating a table from the JSON events below. Can someone please help me with that? The table columns should be 'Name' and 'Count'; Name should hold "cruice", "crpice" etc. and Count should hold the corresponding values. Any help would be appreciated. Thanks

11/7/23 9:04:23.616 PM   "Year": { host = iapp6373.howard.ms.com source = /tmp/usage_snapshot.json sourcetype = tsproid_prod.db2ts_log_generator:app
11/7/23 9:04:23.616 PM   "Top30RequesterInOneYear": { host = iapp6373.howard.ms.com source = /tmp/usage_snapshot.json sourcetype = tsproid_prod.db2ts_log_generator:app
11/7/23 9:04:23.616 PM   "cruice": 2289449, host = iapp6373.howard.ms.com source = /tmp/usage_snapshot.json sourcetype = tsproid_prod.db2ts_log_generator:app
11/7/23 9:04:23.616 PM   "crpice": 1465846, host = iapp6373.howard.ms.com source = /tmp/usage_snapshot.json sourcetype = tsproid_prod.db2ts_log_generator:app
11/7/23 9:04:23.616 PM   "zathena": 1017289, host = iapp6373.howard.ms.com source = /tmp/usage_snapshot.json sourcetype = tsproid_prod.db2ts_log_generator:app
11/7/23 9:04:23.616 PM   "qrecon": 864252, host = iapp6373.howard.ms.com source = /tmp/usage_snapshot.json sourcetype = tsproid_prod.db2ts_log_generator:app
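A sketch of one approach (an assumption, not from the thread: it presumes each name/count pair appears in _raw as "name": number, as in the samples above; the regex is illustrative):

```
index=* source="/tmp/usage_snapshot.json" sourcetype="tsproid_prod.db2ts_log_generator:app"
| rex field=_raw "\"(?<Name>[^\"]+)\":\s*(?<Count>\d+)"
| where isnotnull(Count)
| table Name Count
```

The where clause drops the structural lines such as "Year": { and "Top30RequesterInOneYear": {, which have no numeric value for the regex to capture.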
Hi Rick, I am trying to make a dropdown for the status to filter for "still open", "closed" etc.

index=nessus Risk=Critical
| eval state=if(_time<now()-7*85400,"OLD","NEW")
| eval status=case(state="OLD" and state="NEW","still open",state="OLD","closed",state="NEW","Yummy, a fresh one!")

and I added | stats count by status. I get the information for the 3 status outputs in the normal search. I added this in the dropdown and can choose between the 3, but when I choose one of them I get "No results found" back. Everything with * works. I tried static options and it's the same problem: I can't filter on the 3 status outputs. Do you know what I am doing wrong? Thanks
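A guess at the usual cause (an assumption, not a confirmed diagnosis): status only exists after the eval, so a dropdown token placed in the base search can never match it. Applying the token after the evals might work, e.g. with a hypothetical token name status_tok:

```
index=nessus Risk=Critical
| eval state=if(_time<now()-7*85400,"OLD","NEW")
| eval status=case(state="OLD" and state="NEW","still open",state="OLD","closed",state="NEW","Yummy, a fresh one!")
| search status="$status_tok$"
| stats count by status
```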
Hi all, I still failed to decrypt the ePO logs. This is my config:

[tcp://6514]
connection_host = ip
host = DCHQ-SIMSL-01
source = 10.220.34.23:6514
sourcetype = mcafee:epo:syslog
index = mcafee

[SSL]
serverCert = /splunk/cert/splunk-epo-remote.pem
requireClientCert = false
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256

Any ideas?
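One thing worth checking (an assumption, not a confirmed fix for this case): TLS-decrypting TCP inputs use a [tcp-ssl://<port>] stanza; a plain [tcp://] stanza accepts the connection but never decrypts it. A sketch with the same settings:

```
[tcp-ssl://6514]
connection_host = ip
host = DCHQ-SIMSL-01
source = 10.220.34.23:6514
sourcetype = mcafee:epo:syslog
index = mcafee

[SSL]
serverCert = /splunk/cert/splunk-epo-remote.pem
requireClientCert = false
```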
The eval examples I provided yesterday are for SPL queries.  They can be modified for props.conf files, however.
I don't know why Splunk doesn't distribute clear instructions or tools to install and configure on Linux properly. RHEL 9.x does not have init.d, so you need to set boot-start with -systemd-managed 1, but the service, even once installed, also needs systemctl enable SplunkForwarder.service. In RHEL 8 this is not the case.   The latest forwarder 9.1.1 also won't set up properly if you don't use user-seed.conf.    I came up with this, which does its job somehow; it would be nice if someone would add their ideas to make it better.   (I'm running Splunk as root for testing purposes.)

#!/bin/bash
SPLUNK_FILE="splunkforwarder-9.1.1-64e843ea36b1.x86_64.rpm"
rpm -ivh "$SPLUNK_FILE"
## change ownership to root (running Splunk as root here, for testing only)
chown -R root:root /opt/splunkforwarder
## create user-seed.conf so Splunk sets admin credentials without user interaction
cat <<EOF > /opt/splunkforwarder/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = changeme
EOF
## configure Splunk (RHEL 8 and earlier: init.d-managed)
/opt/splunkforwarder/bin/splunk set deploy-poll 192.168.68.129:8089 --accept-license --answer-yes --auto-ports --no-prompt
/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 0
/opt/splunkforwarder/bin/splunk start --no-prompt --answer-yes
## configure Splunk on RHEL 9.x (systemd-managed; the service must also be enabled)
#/opt/splunkforwarder/bin/splunk set deploy-poll 192.168.68.129:8089 --accept-license --answer-yes --auto-ports --no-prompt
#/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 1
#systemctl enable SplunkForwarder.service
#systemctl start SplunkForwarder.service
Hey @jwilczek I'm afraid not. Have you tried running a later version of Splunk? I suspect you won't run into the problem there.
Hi @richgalloway , these eval group and eval user stanzas have to be in transforms.conf, right? Thanks