You could try something like this

<your index search>
| eventstats count by Version
| eventstats max(count) as top
| where count=top
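For illustration, a minimal sketch of that approach on generated data (the Version and host values are made up); the final stats line is an optional extension that lists the hosts within the surviving version:

| makeresults count=6
| streamstats count as n
| eval Version=case(n<=3, "9.2.1", n<=5, "9.1.4", true(), "9.0.8"), host="host".n
| eventstats count by Version
| eventstats max(count) as top
| where count=top
| stats values(host) as hosts by Version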
The timechart command accepts only one field name in the by clause.  Anything else will result in an error.
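For illustration, a minimal sketch on generated data (field names hypothetical): a single by field works, and two dimensions can first be combined into one field with eval:

| makeresults count=4
| streamstats count as n
| eval host="host".(n%2), EventCode=if(n<=2, "4624", "4625")
| eval series=host.":".EventCode
| timechart span=1m count by series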
Hi All, I have an output from a lookup table in Splunk where the team work timings field comes through as:

TeamWorkTimings
09:00:00-18:00:00

I want the output separated into two fields, like:

TeamStart   TeamEnd
09:00:00    18:00:00

Please help me get this output in Splunk.
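For illustration, a minimal sketch splitting such a value on the hyphen with rex (assuming the field is literally named TeamWorkTimings as shown):

| makeresults
| eval TeamWorkTimings="09:00:00-18:00:00"
| rex field=TeamWorkTimings "(?<TeamStart>[^-]+)-(?<TeamEnd>.+)"
| table TeamStart TeamEnd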
Hello, I have the below dataset from a Splunk search.

Name   percentage
A      71%
B      90%
C      44%
D      88%
E      78%

I need to change the percentage field value colors in the email alert as per the rule below. My requirement is to achieve this by updating sendemail.py.

95+ = green, 80-94 = amber, <80 = red

@tscroggins @ITWhisperer @yuanliu @bowesmana
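The ask here is a sendemail.py change, but for illustration, the thresholds themselves can be sketched in SPL with a case() eval before the alert fires (Name/percentage taken from the post; the color field name is hypothetical):

| makeresults
| eval Name="A", percentage="71%"
| eval pct=tonumber(replace(percentage, "%", ""))
| eval color=case(pct>=95, "green", pct>=80, "amber", true(), "red")
| table Name percentage color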
Hi @KendallW, I reread your post and realized I didn't answer the Identity question. I do not get an error when saving the identity.
Try something like this

| eval {Function}_TIME=_time
| stats values(Date_of_reception) as Date_of_reception values(*_TIME) as *_TIME by JOBNAME
| eval Diff=ENDED_TIME-STARTED_TIME
| fieldformat STARTED_TIME=strftime(STARTED_TIME,"%H:%M:%S")
| fieldformat ENDED_TIME=strftime(ENDED_TIME,"%H:%M:%S")
| fieldformat PURGED_TIME=strftime(PURGED_TIME,"%H:%M:%S")
| fieldformat Diff=tostring(Diff,"duration")
Please make sure you have entered the correct username and password for an admin user on the remote search peer.
Hi, I am getting Axios 500 errors after installing the Salesforce Streaming API add-on app on my Splunk Cloud Trial (Classic). I can't configure the Inputs or Configuration tabs at all. I have a feeling that this add-on isn't properly supported in the Trial Cloud instances. Has anyone had any luck getting this to work on Cloud Classic? Am I missing an additional configuration or app that I need to install to get this to work? Any help would be greatly appreciated. P.S.: I was able to get the Salesforce add-on to install, configure, and connect to my Sandbox just fine. It is this Streaming API add-on that seems to be the issue.
This gives me the result in the below format. Is it possible to have one more field in the table and sort the columns in the below order?

| JOBNAME | Date_of_reception | STARTED_TIME | ENDED_TIME | PURGED_TIME | Diff Between STARTED_TIME and ENDED_TIME |
| $VVF119P | 2024/04/17 | 02:12:37 | 02:12:46 | 02:12:50 | 00:00:09 |
I am trying to create a report that pulls a version, but only shows one instance and then lists all the hosts within that version.
Take a look at this solution: https://community.splunk.com/t5/Splunk-Search/Convert-Hexadecimal-IP-v4-addresses-to-decimal/td-p/40938

You could use (?<d1>\d{1,3})\.(?<d2>\d{1,3})\.(?<d3>\d{1,3})\.(?<d4>\d{1,3}) for your particular example as the rex conversion.

| makeresults count=1
| eval src_ip = "192.168.1.1"
| streamstats values(src_ip) as src_ip by _time
| rex field=src_ip "(?<d1>\d{1,3})\.(?<d2>\d{1,3})\.(?<d3>\d{1,3})\.(?<d4>\d{1,3})"
| eval dec_src_ip = 'd1'*16777216+'d2'*65536+'d3'*256+'d4'+0

There is also an app that provides a command to do the conversion: https://splunkbase.splunk.com/app/512
Try something like this

index=events_prod_cdp_penalty_esa source="SYSLOG" sourcetype=zOS-SYSLOG-Console (TERM(VVF119P)) ("- ENDED" OR "- STARTED" OR "PURGED --")
| rex field=TEXT "(VVF119P -)(?<Function>[^\-]+)"
| fillnull Function value=" PURGED"
| eval DAT = strftime(relative_time(_time, "+0h"), "%Y/%m/%d")
| rename DAT as Date_of_reception
| table JOBNAME, Date_of_reception, Function, _time
| sort _time
| eval {Function}_TIME=strftime(_time,"%H:%M:%S")
| stats values(Date_of_reception) as Date_of_reception values(*_TIME) as *_TIME by JOBNAME
I apologize for being vague. I was just trying to stick to the point. The source (LogStash) cloud server is CentOS, and we have zero access or control beyond the initial setup happening over the next few days. We are not permitted to install ANY software on this server as it is externally hosted and locked down. I plan to try and force the issue of a UF install, but I expect to be unsuccessful, in which case LogStash is all we have. My entire environment is 60-70 UFs to an on-prem indexer. I have no LogStash or HEC experience. I have a bad feeling about this...
Hello

My lookup table has fields of src_ip, dst_ip, and description.

src_ip=192.168.1.1
dst_ip=192.168.1.100
description="internal IP"

I want to convert the src_ip and dst_ip fields to decimal. If you know how to convert them, please add a reply.

Thank you
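For illustration, a minimal sketch applying a dotted-quad-to-decimal conversion to both fields at once with foreach and split() (sample values taken from the post; the inputlookup step is omitted):

| makeresults
| eval src_ip="192.168.1.1", dst_ip="192.168.1.100", description="internal IP"
| foreach src_ip dst_ip
    [| eval octets=split('<<FIELD>>', ".")
    | eval dec_<<FIELD>>=tonumber(mvindex(octets,0))*16777216 + tonumber(mvindex(octets,1))*65536 + tonumber(mvindex(octets,2))*256 + tonumber(mvindex(octets,3))]
| fields - octets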
Hi

Can you please let me know how I can display the below 3 rows in a single row?

Query:

index=events_prod_cdp_penalty_esa source="SYSLOG" sourcetype=zOS-SYSLOG-Console (TERM(VVF119P)) ("- ENDED" OR "- STARTED" OR "PURGED --")
| rex field=TEXT "(VVF119P -)(?<Function>[^\-]+)"
| fillnull Function value=" PURGED"
| eval DAT = strftime(relative_time(_time, "+0h"), "%Y/%m/%d")
| rename DAT as Date_of_reception
| table JOBNAME, Date_of_reception, Function, _time
| sort _time

I want to display the result in the below format:

| JOBNAME | Date_of_reception | STARTED_TIME | ENDED_TIME | PURGED_TIME |
| $VVF119P | 2024/04/17 | 02:12:37 | 02:12:46 | 02:12:50 |

Thanks in advance.
Hi everyone, I have a line chart which works perfectly, but only for one single value:

index=events ComputerName=* Account_Name=*** EventCode=$event_code_input$
| timechart count by EventCode

As you can see, it reads EventCode as a user input. This is a multi-selection box. So if the user selects 4624, it plots the line - no issue. But if they select 4624 AND 4625, it produces an error. I've tried many different variations and chart types but no success. Thanks
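If the multiselect token expands to bare space-separated values (e.g. 4624 4625), the generated search becomes invalid. One common pattern, sketched here on the assumption that the input is named event_code_input with its delimiter set to a comma, is to wrap the token in IN():

index=events ComputerName=* Account_Name=*** EventCode IN ($event_code_input$)
| timechart count by EventCode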
A couple of changes from your last image. Notice the change in evaluating time variables: now() and strptime instead of strftime. You could also remove the eval aHostMatch... code if you are filtering the hosts in the initial tstats.

| tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host
| eval currTime = now() ```<- I was not getting a value when using _time with tstats?```
| eval excluded_start_time=strptime("2024-03-16 18:25:00", "%Y-%m-%d %H:%M:%S")
| eval excluded_stop_time=strptime("2024-03-16 18:30:00", "%Y-%m-%d %H:%M:%S")
| eval is_maintenance_window=if(currTime >= excluded_start_time AND currTime <= excluded_stop_time, 1, 0)
| eval aHostMatch = case(
    match(host,"HOSTNAME1"), 1, ```<- Case sensitive```
    match(host,"HOSTNAME2"), 1, ```<- Case sensitive```
    true(), 0)
```| where count == 0 AND is_maintenance_window == 1 AND aHostMatch == 1```
| table host count excluded_start_time, currTime, excluded_stop_time, is_maintenance_window, aHostMatch

Also, if a host is not reporting data (down), you will not have a row returned from your initial query, and therefore no row for that host when you check (where count == 0). tstats does not support multiple timeframes...

Another approach is to not use tstats and instead use stats counts: a first query (earliest=-30m@m latest=-15m@m) to count historical entries, then a second query to get current entries (earliest=-14m@m latest=-1m@m), then compare historical counts and current counts by host.

index=cts-dcpsa-app host=HOSTNAME1 OR host=HOSTNAME2 earliest=-30m@m latest=-15m@m
| stats count AS aHistCount by host
| appendcols
    [ search index=cts-dcpsa-app host=HOSTNAME1 OR host=HOSTNAME2 earliest=-14m@m latest=-1m@m
    | stats count AS aCurrCount by host
    | table host, aCurrCount ]
| table host, aHistCount, aCurrCount
Hello Splunkers!! I want to achieve the below screenshot visualization.

Below is my current query:

index=ABC sourcetype=ReplenishmentOrderAssign OR sourcetype=ReplenishmentOrderCompleted OR sourcetype=ReplenishmentOrderStarted OR sourcetype=ReplenishmentOrderCancel
| rex field=_raw "SenderFmInstanceName\>(?P<Workstation>[A-Za-z0-9]+\/[A-Za-z0-9]+)\<\/SenderFmInstanceName"
| rename ReplenishmentOrderAssign.OrderId as OrderId
| eval TimeAssigned=if(like(sourcetype,"%Assign"),_time,null()), TimeStarted=if(like(sourcetype,"%Started"),_time,null()), TimeCompleted=if(like(sourcetype,"%Completed"),_time,null())
| eventstats count(OrderId) as CountOrderTypes by OrderId
| timechart span=5m count(TimeAssigned) as Assigned count(TimeStarted) as Started count(TimeCompleted) as Completed by Workstation
| streamstats sum(*)
| foreach "sum(Assigned:*)" [| eval <<MATCHSEG1>>Assigned='<<FIELD>>'-'sum(Completed:<<MATCHSEG1>>)']
| foreach "sum(Started:*)" [| eval <<MATCHSEG1>>Started='<<FIELD>>'-'sum(Completed:<<MATCHSEG1>>)']
| fields _time DEP*
| foreach "DEP/*" [| eval <<MATCHSEG1>>=if('<<FIELD>>'>0,1,0)]
| fields - DEP/*
| foreach "*Assigned" [| eval <<FIELD>>='<<FIELD>>'-'<<MATCHSEG1>>Started']
| foreach "*Assigned" [| eval <<MATCHSEG1>>Idle=1-'<<FIELD>>'-'<<MATCHSEG1>>Started']
| addtotals *Started fieldname=Active
| addtotals *Assigned fieldname=Assigned
| addtotals *Idle fieldname=Idle
| fields _time Idle Assigned Active
| bin span=$span$ _time
| eventstats sum(*) as * by _time
| dedup _time

The current query is giving me the below visualization. Please help me with where I need to change the query to get the above visualization.
When we start the official Docker container image splunk/splunk:9.2.1 with the extra var SPLUNK_DISABLE_POPUPS=true:

docker run -d -p 8000:8000 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=OUR_PASS" -e "SPLUNK_DISABLE_POPUPS=true" --name splunk splunk/splunk:9.2.1

the Ansible task Disable Popups fails with this error message:

TASK [splunk_common : Disable Popups] ******************************************
changed: [localhost] => (item={'key': '/servicesNS/admin/user-prefs/data/user-prefs/general', 'value': 'hideInstrumentationOptInModal=1&notification_python_3_impact=false&showWhatsNew=0'})
failed: [localhost] (item={'key': '/servicesNS/nobody/splunk_instrumentation/admin/telemetry/general', 'value': 'showOptInModal=0&optInVersionAcknowledged=4'}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": {
        "key": "/servicesNS/nobody/splunk_instrumentation/admin/telemetry/general",
        "value": "showOptInModal=0&optInVersionAcknowledged=4"
    }
}
MSG: POST /servicesNS/nobody/splunk_instrumentation/admin/telemetry/general admin ******** 8089 {'showOptInModal': '0&optInVersionAcknowledged=4'} None None [200, 201, 409];;; AND excep_str: URL: https://127.0.0.1:8089/servicesNS/nobody/splunk_instrumentation/admin/telemetry/general; data: {"showOptInModal": "0&optInVersionAcknowledged=4"}, exception: API call for https://127.0.0.1:8089/servicesNS/nobody/splunk_instrumentation/admin/telemetry/general and data as {'showOptInModal': '0&optInVersionAcknowledged=4'} failed with status code 400:

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Argument "{"showOptInModal": "0" is not supported by this handler.</msg>
  </messages>
</response>

failed: [localhost] (item={'key': '/servicesNS/admin/search/data/ui/ui-tour/search-tour', 'value': 'tourPage=search&viewed=1'}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": {
        "key": "/servicesNS/admin/search/data/ui/ui-tour/search-tour",
        "value": "tourPage=search&viewed=1"
    }
}
MSG: POST /servicesNS/admin/search/data/ui/ui-tour/search-tour admin ******** 8089 {'tourPage': 'search&viewed=1'} None None [200, 201, 409];;; AND excep_str: URL: https://127.0.0.1:8089/servicesNS/admin/search/data/ui/ui-tour/search-tour; data: {"tourPage": "search&viewed=1"}, exception: API call for https://127.0.0.1:8089/servicesNS/admin/search/data/ui/ui-tour/search-tour and data as {'tourPage': 'search&viewed=1'} failed with status code 400:

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Argument "{"tourPage": "search" is not supported by this handler.</msg>
  </messages>
</response>

Because of this, the container fails to start. When the Disable Popups variable is not given, Splunk starts without issue. Other Docker image versions, like splunk/splunk:9.2, don't have this issue.

Any help is appreciated.
Thanks for the reply @isoutamo. I'll definitely have a look at the .conf presentation! With regards to asking for the details from REST, I've only been able to query details from the search heads, i.e. splunk_server=local, by searching. I'm not sure I was clear on the reason behind my question, but what I'm looking for is a way, for example, to go to a dashboard, search for sourcetype=foo, and find the props details which reside on the idxm/indexer peers. So it's really a matter of being able to read the current configuration without the "hassle" of logging on and reading files, not making configuration changes. As for version control, I have the data available in git, but I want it even more readily available directly in Splunk, since that is the source after all.
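For what it's worth, a minimal sketch of reading props stanzas from all connected search peers via the rest command (assuming the peers are reachable from the search head; sourcetype foo and the tabled attributes are illustrative):

| rest splunk_server=* /services/configs/conf-props
| search title="foo"
| table splunk_server title LINE_BREAKER SHOULD_LINEMERGE TIME_PREFIX TIME_FORMAT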