All Posts

Hi @nill, good for you, see you next time! Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
You mention the tolerance being influenced by the spike itself, the one that occurred. Are you fitting your algorithm on data which includes the intended outlier? Using only data you consider normal to fit the function would likely solve your issue here. The same goes for continuous re-training via partial_fit: use this only after all new data has been predicted using the old model state. If this is not the issue, some more information regarding which MLTK algorithm you are planning to use, your current parameter setup, and what data you are using for your train/test split might give a better idea as to the root cause of your issue.
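In case it helps, here is a minimal sketch of that fit-on-normal-data pattern, assuming MLTK's DensityFunction and a numeric field called perc_change (both placeholders here - substitute your actual algorithm, field, and base search):

index=my_metrics sourcetype=my_kpi ```hypothetical base search```
| where perc_change < 100 ```exclude the known spike so the model only sees normal data```
| fit DensityFunction perc_change threshold=0.01 into normal_model

index=my_metrics sourcetype=my_kpi earliest=-1h
| apply normal_model ```scores new data against the previously fitted model```

The key point is that fit only ever runs on data you have filtered to be normal, while apply scores new data against that frozen model state.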
Hi @gcusello, thank you so much for the assistance. Greatly appreciated. Regards, Nill
First of all, I suspect that by "continuous increase" you actually mean monotonic increase. Are you thinking of delta instead? What is the output format you need in the report? If you want all the event details, you can then use eventstats to determine whether there was any decrement.

| delta perc_change as delta
| eventstats min(delta) as min_delta
| where min_delta >= 0
| table _time GB delta perc_change

If you do not need every event, you may construct some stats command that is more efficient, as in the sketch below.
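For example, if all you need is a yes/no for the alert, a more compact sketch under the same assumptions (perc_change as the monitored field; the 3-hour window comes from the search time range, e.g. earliest=-3h):

index=my_index earliest=-3h ```hypothetical base search```
| sort 0 _time
| delta perc_change as delta
| stats count min(delta) as min_delta
| where count > 1 AND min_delta >= 0

This emits a single row only when no decrement occurred anywhere in the window, which is exactly the condition for sending the report.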
question, the alert should be triggered with both set-1 and set-2, because set-1 has one unchanged event_id whereas set-2 has three unchanged event_ids. In that case,

| stats list(_time) as _time by event_id event_name task_id
| where mvcount(_time) > 1
| fieldformat _time = strftime(_time, "%F %H:%M:%S.%3Q")

should suffice. The emulated dataset 1 gives

event_id  event_name        task_id  _time
1274856   pending-transfer  1        2022-09-04 21:40:39.000, 2022-09-04 22:10:39.000

Emulated dataset 2 gives

event_id  event_name        task_id  _time
1274748   pending-transfer  2        2022-09-04 22:05:39.000, 2022-09-04 21:35:39.000
1274856   pending-transfer  1        2022-09-04 22:10:39.000, 2022-09-04 21:40:39.000
1274902   pending-transfer  3        2022-09-04 22:00:39.000, 2022-09-04 21:30:39.000

Can you show a dataset where the above does not meet the requirement? (Just modify the emulations so we are on the same page.)
Hi @grotti, if you don't have too many comments for each row, you could use:

index=notable status_label=Closed
| stats count values(comment) AS comment BY rule_title
| sort 10 -count

Ciao. Giuseppe
Hi @Yashvik, if you're not interested in the value for each sourcetype, but only want to know which sourcetypes are in an index, you can modify the above search in this way:

index=_internal source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| bin _time span=1d
| stats sum(b) as b by _time, pool, s, st, h, idx
| stats values(st) AS sourcetype sum(b) AS volumeB by _time idx
| rename idx AS index
| eval volumeB=round(volumeB/1024/1024/1024,2)
| sort 20 -volumeB

Ciao. Giuseppe
Hi, I'm trying to create a new account on Splunk On-Call with Global Admin permissions and share it with a group of higher-ups. I would like to know if there is any URL (like the Splunk one) to bypass SSO and log in with basic credentials. The reason for this is that, since there will be multiple people using the same account, we don't want SSO/MFA to be set up.
Yes, our issue with the splunk-winevtlog.exe process crashing while capturing data from the Security log was resolved. I haven't looked for the bug you mentioned.
Hi @Anantha123 ...

1) On the DMC, you can get the index size details.

2) Using the dbinspect command you can get the index size:

| dbinspect index=myindex
| eval GB=sizeOnDiskMB/1024
| stats sum(GB)

3) rest command:

| rest /services/data/indexes
| stats values(currentDBSizeMB) by title

4) eventcount command:

| eventcount summarize=false index=* report_size=true
| eval GB=(size_bytes/1024)/1024/1024
| stats sum(GB) by index, server

5) collect command (not sure of this... pls test this one):

| collect index=myindex
That worked perfectly, thank you!
What are the different ways to calculate the size of one index? Looking for solutions other than "license_usage.log". Appreciate your help. Thank you.
Please help me with how I can check whether a field value is continuously increasing for 3 hours. I tried the query below but it does not help. perc_change values are extracted from logs, whereas prev_value and growing are calculated from the perc_change values.

| streamstats current=f window=1 latest(perc_change) as prev_value
| fillnull value=0
| eval growing = if(perc_change < prev_value, 1, 0)
| table _time GB change perc_change prev_value growing

Getting values as:

perc_change  prev_value  growing
60           0           0
35           60          1
33           35          1
150          33          0

Expectations:

perc_change  prev_value  growing
60           35          1
35           33          1
33           150         0
150          0           0

I have to send a report if the perc_change values are continuously growing for 3 hours. Appreciate your help. Thank you.
By packing all the small SVG graphs into one big one (so only one SVG for the whole dashboard, with 140 subitems), the load times have been drastically reduced and are now back to 4-5 seconds (which is still long, but seems to be normal with Splunk).
Our Splunk implementation is a Splunk Enterprise deployment where the indexer is set up along with several universal forwarders, and on one of the servers where a universal forwarder is installed, a JMS queue is set up. I need to read messages from this JMS queue; how can I achieve this in Splunk?
Yes, I already tried removing the blacklist and even tried the whitelist, but the result is still the same: event code 4662 is not generated at all. After my team removed the blacklist, we also tried enumerating Active Directory to see if the event would be generated, but when we checked in Splunk the event still did not show up. Are there other settings, or is the regex maybe wrong?
I am new to Splunk and am getting the error below. It seems we started getting this error after a yum update. Any help or suggestion is really appreciated.

[root@XXXXXX ~]# systemctl status Splunkd.service
● Splunkd.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
   Loaded: loaded (/etc/systemd/system/Splunkd.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sun 2023-09-17 13:46:55 MST; 3h 58min ago
  Process: 3466 ExecStartPost=/bin/bash -c chown -R splunk:splunk /sys/fs/cgroup/memory/system.slice/Splunkd.service (code=exited, status=0/SUCCESS)
  Process: 3464 ExecStartPost=/bin/bash -c chown -R splunk:splunk /sys/fs/cgroup/cpu/system.slice/Splunkd.service (code=exited, status=0/SUCCESS)
  Process: 3463 ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd (code=exited, status=1/FAILURE)
 Main PID: 3463 (code=exited, status=1/FAILURE)

Sep 17 13:46:55 XXX.xx.xxx systemd[1]: Splunkd.service: Service RestartSec=100ms expired, scheduling restart.
Sep 17 13:46:55 XXX.xx.xxx systemd[1]: Splunkd.service: Scheduled restart job, restart counter is at 5.
Sep 17 13:46:55 XXX.xx.xxx systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Sep 17 13:46:55 XXX.xx.xxx systemd[1]: Splunkd.service: Start request repeated too quickly.
Sep 17 13:46:55 XXX.xx.xxx systemd[1]: Splunkd.service: Failed with result 'exit-code'.
Sep 17 13:46:55 XXX.xx.xxx systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.

[splunk@xxx.xx.xxx bin]$ ./splunk enable boot start
/opt/splunk/bin/splunkd: symbol lookup error: /opt/splunk/bin/splunkd: undefined symbol: SSL_load_error_strings
/opt/splunk/bin/splunkd: symbol lookup error: /opt/splunk/bin/splunkd: undefined symbol: SSL_load_error_strings
/opt/splunk/bin/splunkd: symbol lookup error: /opt/splunk/bin/splunkd: undefined symbol: SSL_load_error_strings
Did not find "disabled" setting of "kvstore" stanza in server bundle.
/opt/splunk/bin/splunkd: symbol lookup error: /opt/splunk/bin/splunkd: undefined symbol: SSL_load_error_strings
/opt/splunk/bin/splunkd: symbol lookup error: /opt/splunk/bin/splunkd: undefined symbol: SSL_load_error_strings
splunkd: symbol lookup error: splunkd: undefined symbol: SSL_load_error_strings
/opt/splunk/bin/splunkd: symbol lookup error: /opt/splunk/bin/splunkd: undefined symbol: SSL_load_error_strings
"/opt/splunk/bin/splunkd" returned 127
There is probably an automatic lookup associated with the data you are searching, and there is no lookup file for it. Look for the pulse_connectsecure_action_lookup definition in the automatic lookups.
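If you want to confirm this from the search bar, here is a quick sketch (data/props/lookups is the REST endpoint behind automatic lookups; the wildcard on the name is an assumption):

| rest /servicesNS/-/-/data/props/lookups ```list automatic (props-based) lookups across apps```
| search title=*pulse_connectsecure*
| table title eai:acl.app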
You can use timechart/timewrap if you want to end up with a timechart dataset which you can plot overlaid on each other:

index=* handler=traffic (earliest=-1d@d latest=@d) OR (earliest=@d latest=now)
| timechart span=30m dc(dsid) as traffic
| timewrap 1d
| eval delta=traffic_latest_day - traffic_1day_before

or you can 'move' the previous day's values to 'today' for charting purposes, by just adding 86400 (the number of seconds in a day) to yesterday's values (which is what timewrap effectively does):

index=* handler=traffic (earliest=-1d@d latest=@d) OR (earliest=@d latest=now)
| eval d=if(_time>=relative_time(now(), "@d"), "today", "yesterday")
| eval _time=_time + if(d="yesterday", 86400, 0)
| timechart fixedrange=f span=30m dc(dsid) by d
| eval delta=today-yesterday
Does the _time of the event equate to the time of the whenCreated field? If your event time is different from the whenCreated value, then if you want to find all accounts created in the past day, you need to know what the event times will be for those events - Splunk will always retrieve events from a given time window you specify. If you just search from -24h to now, then if you have events where the whenCreated date is older than 24 hours ago, you need to parse that field and compare, i.e.

... | eval created=strptime(whenCreated, "%I:%M:%S %p, %a %m/%d/%y")
| where created>=relative_time(now(), "-24h")

As for your time format string, there are 3 issues:

%H is a 24 hour clock; as you have AM/PM in your time string, you would use %I to parse a 12 hour format.
%p is the notation for AM/PM.
%Y is a 4 digit year - your whenCreated date is only 2 digits, in which case you should use %y.

See the docs here: https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Commontimeformatvariables
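To sanity-check the corrected format string, here is a throwaway sketch (the whenCreated sample value is made up to match the format discussed):

| makeresults
| eval whenCreated="09:15:30 PM, Mon 09/18/23" ```hypothetical sample value```
| eval created=strptime(whenCreated, "%I:%M:%S %p, %a %m/%d/%y")
| eval readable=strftime(created, "%F %T")

If readable comes back as 2023-09-18 21:15:30, the format string is parsing correctly.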