A couple of changes from your last image. Notice the change in evaluating the time variables: now(), and strptime instead of strftime. You could also remove the eval aHostMatch... code if you are filtering the hosts in the initial tstats.

| tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host
| eval currTime = now() ```<- I was not getting a value when using _time with tstats?```
| eval excluded_start_time=strptime("2024-03-16 18:25:00", "%Y-%m-%d %H:%M:%S")
| eval excluded_stop_time=strptime("2024-03-16 18:30:00", "%Y-%m-%d %H:%M:%S")
| eval is_maintenance_window=if(currTime >= excluded_start_time AND currTime <= excluded_stop_time,1,0)
| eval aHostMatch = case(
    match(host,"HOSTNAME1"),1, ```<- case sensitive```
    match(host,"HOSTNAME2"),1, ```<- case sensitive```
    true(),0)
```| where count == 0 AND is_maintenance_window == 1 AND aHostMatch == 1```
| table host count excluded_start_time, currTime, excluded_stop_time, is_maintenance_window, aHostMatch

(Note: the strptime format needs %H:%M:%S to match the seconds in the timestamp strings above.)

Also, if a host is not reporting data (down), you will not have a row returned from your initial query, so there is no row for that host when you check with "where count == 0". And tstats does not support multiple timeframes.

Another approach is to not use tstats and use a stats count instead. A first query (earliest=-30m@m latest=-15m@m) counts historical entries, then a second query gets current entries (earliest=-14m@m latest=-1m@m), then you compare historical counts and current counts by host:

index=cts-dcpsa-app host=HOSTNAME1 OR host=HOSTNAME2 earliest=-30m@m latest=-15m@m
| stats count AS aHistCount by host
| appendcols
    [ search index=cts-dcpsa-app host=HOSTNAME1 OR host=HOSTNAME2 earliest=-14m@m latest=-1m@m
      | stats count AS aCurrCount by host
      | table host, aCurrCount ]
| table host, aHistCount, aCurrCount
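To finish the second approach, a minimal sketch of the comparison step (the alert condition and the coalesce default are assumptions; adjust them to your definition of "down"):

| eval aCurrCount=coalesce(aCurrCount,0)
| where aHistCount > 0 AND aCurrCount == 0 ```<- reported historically but nothing in the current window```
| table host, aHistCount, aCurrCount

Keep in mind that appendcols aligns rows by position, not by host, so verify both result sets return hosts in the same order, or use join on host instead.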
Hello Splunkers!! I want to achieve the visualization in the screenshot below. Here is my current query:

index=ABC sourcetype=ReplenishmentOrderAssign OR sourcetype=ReplenishmentOrderCompleted OR sourcetype=ReplenishmentOrderStarted OR sourcetype=ReplenishmentOrderCancel
| rex field=_raw "SenderFmInstanceName\>(?P<Workstation>[A-Za-z0-9]+\/[A-Za-z0-9]+)\<\/SenderFmInstanceName"
| rename ReplenishmentOrderAssign.OrderId as OrderId
| eval TimeAssigned=if(like(sourcetype,"%Assign"),_time,null()), TimeStarted=if(like(sourcetype,"%Started"),_time,null()), TimeCompleted=if(like(sourcetype,"%Completed"),_time,null())
| eventstats count(OrderId) as CountOrderTypes by OrderId
| timechart span=5m count(TimeAssigned) as Assigned count(TimeStarted) as Started count(TimeCompleted) as Completed by Workstation
| streamstats sum(*)
| foreach "sum(Assigned:*)" [| eval <<MATCHSEG1>>Assigned='<<FIELD>>'-'sum(Completed:<<MATCHSEG1>>)']
| foreach "sum(Started:*)" [| eval <<MATCHSEG1>>Started='<<FIELD>>'-'sum(Completed:<<MATCHSEG1>>)']
| fields _time DEP*
| foreach "DEP/*" [| eval <<MATCHSEG1>>=if('<<FIELD>>'>0,1,0)]
| fields - DEP/*
| foreach "*Assigned" [| eval <<FIELD>>='<<FIELD>>'-'<<MATCHSEG1>>Started']
| foreach "*Assigned" [| eval <<MATCHSEG1>>Idle=1-'<<FIELD>>'-'<<MATCHSEG1>>Started']
| addtotals *Started fieldname=Active
| addtotals *Assigned fieldname=Assigned
| addtotals *Idle fieldname=Idle
| fields _time Idle Assigned Active
| bin span=$span$ _time
| eventstats sum(*) as * by _time
| dedup _time

The current query is giving me the visualization below. Please help me understand what I need to change in the query to get the visualization above.
When we start the official Docker container image splunk/splunk:9.2.1 with the extra var SPLUNK_DISABLE_POPUPS=true:

docker run -d -p 8000:8000 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=OUR_PASS" -e "SPLUNK_DISABLE_POPUPS=true" --name splunk splunk/splunk:9.2.1

the Ansible task "Disable Popups" fails with this error message:

TASK [splunk_common : Disable Popups] ******************************************
changed: [localhost] => (item={'key': '/servicesNS/admin/user-prefs/data/user-prefs/general', 'value': 'hideInstrumentationOptInModal=1&notification_python_3_impact=false&showWhatsNew=0'})
failed: [localhost] (item={'key': '/servicesNS/nobody/splunk_instrumentation/admin/telemetry/general', 'value': 'showOptInModal=0&optInVersionAcknowledged=4'}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": {
        "key": "/servicesNS/nobody/splunk_instrumentation/admin/telemetry/general",
        "value": "showOptInModal=0&optInVersionAcknowledged=4"
    }
}
MSG: POST /servicesNS/nobody/splunk_instrumentation/admin/telemetry/general {'showOptInModal': '0&optInVersionAcknowledged=4'} [200, 201, 409];;; AND excep_str: URL: https://127.0.0.1:8089/servicesNS/nobody/splunk_instrumentation/admin/telemetry/general; data: {"showOptInModal": "0&optInVersionAcknowledged=4"}, exception: API call failed with status code 400:
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Argument "{"showOptInModal": "0" is not supported by this handler.</msg>
  </messages>
</response>

failed: [localhost] (item={'key': '/servicesNS/admin/search/data/ui/ui-tour/search-tour', 'value': 'tourPage=search&viewed=1'}) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": {
        "key": "/servicesNS/admin/search/data/ui/ui-tour/search-tour",
        "value": "tourPage=search&viewed=1"
    }
}
MSG: POST /servicesNS/admin/search/data/ui/ui-tour/search-tour {'tourPage': 'search&viewed=1'} [200, 201, 409];;; AND excep_str: URL: https://127.0.0.1:8089/servicesNS/admin/search/data/ui/ui-tour/search-tour; data: {"tourPage": "search&viewed=1"}, exception: API call failed with status code 400:
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Argument "{"tourPage": "search" is not supported by this handler.</msg>
  </messages>
</response>

Because of this, the container fails to start. When the SPLUNK_DISABLE_POPUPS variable is not given, Splunk starts without issue. Other Docker image versions, like splunk/splunk:9.2, don't have this issue. (The errors suggest the POST body is no longer being split on "&", so the whole "key=value&key=value" string is sent as a single argument.) Any help is appreciated.
Thanks for the reply @isoutamo. I'll definitely have a look at the .conf presentation! With regard to asking for the details from REST, I've only been able to query details from the search heads, i.e. splunk_server=local, by searching. I'm not sure I was clear on the reason behind my question, but what I'm looking for is a way, for example, to go to a dashboard, search for sourcetype=foo, and find the props details which reside on the indexer peers. So it's really a matter of being able to read the current configuration without the "hassle" of logging on and reading files, not of making configuration changes. As for version control, I have the data available in git, but I want it even more readily available directly in Splunk, since that is the source after all.
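One sketch of that idea, assuming the indexers are attached as search peers and your role is allowed to run the rest command against them (the settings listed in the table are examples, not a complete list):

| rest /services/configs/conf-props splunk_server=* count=0
| search title="foo"
| table splunk_server title TRANSFORMS* REPORT* EXTRACT* SEDCMD* LINE_BREAKER TIME_PREFIX

This reads the effective props.conf stanzas over REST from every peer, so a dashboard panel wrapped around it gives you the "sourcetype=foo -> props" lookup without logging on to the indexers.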
A very useful suggestion @scelikok, it is something new I learned. Thanks. Executing this query I got some results; the message says that the "app bundle download has started and completed". The only thing I don't know is right is that the host field is populated with the DS hostname and not the log source one. By the way, this leads me to agree with you about your last consideration: there must be some error in the path/filename provided. We are going to check those parameters.
Hello, I am building a custom alert action for advanced webhook functionality (allowing header values, removing some data from the payload, etc.) and I want to validate that the URL provided in the config of the alert is one of those listed in the webhook allowed URLs. There is a standard list of allowed URLs for the Splunk standard Webhook action which I want to use. Do you know how I can pull the list of allowed webhook URLs (patterns) from my Python code? I want to reuse the existing configuration instead of creating a custom list of allowed patterns. Only an admin should be able to modify this list, whereas the URL for each alert is created by the user. Thanks!
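For anyone exploring this, one starting point is to inspect the webhook stanza of alert_actions.conf over REST; a sketch (the stanza name and the exact setting that holds the allow list vary by Splunk version, so treat them as assumptions to verify):

| rest /services/configs/conf-alert_actions splunk_server=local
| search title=webhook
| table title param.*

From a custom alert action, the same configs endpoint can be called with the session key Splunk passes to the script in its stdin payload, which keeps the allow list admin-controlled while your code only reads it.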
I need to bring events related to creating and changing a user in the application into this CIM data model (Change -> Account Management). To do this, the action field needs to contain one of the following values - acl_modified, cleared, created, deleted, modified, stopped, lockout, read, logoff, updated, started, restarted, unlocked - according to this documentation. The problem is that the action field already exists in the events with values such as create and delete, and it is used not only to describe actions on users but also on other objects. What method can you recommend to make the field CIM compliant? Event example:

{
  action: delete
  actor_details: { [+] }
  actor_uuid: 11111111
  location: { [+] }
  object_details: { [+] }
  object_type: user   # also can be item, vault, etc.
  object_uuid: 333333333
  session: { [+] }
  timestamp: 33213123
  uuid: 4444444
}
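One common pattern, sketched below, is a search-time eval that maps the raw values onto the CIM ones only when the object is a user (the value mapping here is an assumption based on your example; extend the case() as needed). Once it works in a search, the same expression can become an EVAL-action in props.conf for the sourcetype so the data model picks it up:

| eval action=case(
    object_type=="user" AND action=="create", "created",
    object_type=="user" AND action=="delete", "deleted",
    object_type=="user" AND action=="update", "updated",
    true(), action)

Since EVAL- in props.conf overrides the field only at search time, the raw value in _raw stays untouched, and the true(),action branch keeps the existing values for non-user events.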
Thanks for the reply @richgalloway. I will have a look at the app in more detail, as I have only lightly browsed it in the past. If it doesn't fit the criteria for what I'm looking for in this instance, it looks to be a nice tool to have in the arsenal regardless.
If there is a file read permission error, you should have seen it in the _internal logs. You can check whether the app is installed on your host using the query below:

index=_internal component=PackageDownloadRestHandler host=YourHost app=YourAppName

In my experience, most of the problems with this kind of blind configuration come down to a wrong pathname or filename. And please remember that file inputs are case-sensitive.
Hi @scelikok, yes, it's the first check I performed; "Restart Splunkd" is correctly flagged.
Hi @SplunkExplorer, Did you check the "Restart Splunkd" option for your new input app in the app settings? The Splunk Forwarder needs to be restarted for new inputs.
Hi Splunkers, we have a Windows log source with a UF installed on it. We have no access to this log source: we only know that we collect Windows logs via the UF and it works properly. The collected logs are the usual ones: Security, Application, and so on. Starting from today, we need to add a monitor input: some files are stored in a folder and we need to collect them. So, on our DS, we created another app inside the deployment-apps folder, with a proper inputs.conf and props.conf, and then we deployed it. Why did we create another app instead of simply adding a monitor stanza to the Windows add-on's inputs.conf? Simply because the Windows add-on is deployed on many hosts; on the other hand, we need to monitor the path on only one specific host, so we preferred to deploy a dedicated app, with its own server class and so on. The DS gives no error; the app is shown as deployed with no issues. At the same time, we got no errors looking in splunkd.log and/or the _internal index. However, the logs are not collected. For sure, we are going to reach the host owner and perform basic checks, like: 1) Is the provided path the right one? 2) Does the user running the UF have read permission on that folder? 3) Is the app we deployed visible in the UF's apps folder? But before this, there is a doubt I have: for point 2, in case of permission denied, I should see some error message in the _internal logs, right? Because currently I don't see any error message related to this issue. The behavior is as if the inputs.conf we set in the deployment app is totally ignored: searching _internal and/or splunkd.log, I cannot see anything related to the path we have to monitor.
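For a quick check of whether the UF even registered the monitor stanza, something like the sketch below against the UF's own internal logs may help (the host value and searched path are placeholders; the tailing components named here are the usual ones):

index=_internal host=YOUR_UF_HOST sourcetype=splunkd (component=TailReader OR component=TailingProcessor OR component=WatchedFile)
| search "your\\monitored\\path" OR log_level=WARN OR log_level=ERROR
| table _time component log_level _raw

If nothing at all mentions the path, the input likely never loaded (wrong app location or a missed restart); a permission problem, by contrast, usually does leave WARN/ERROR lines from these components.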
Hello, I have put the SMTP server name in my email settings in Splunk, but the issue is a bit complex: all the previously created alerts/reports on Splunk are arriving on time, but only the ones I created lately are not arriving. Any suggestions?
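As a first diagnostic step, a sketch of a search for errors from the email action in the internal logs (narrow the time range to when one of the new alerts fired):

index=_internal sendemail
| search ERROR OR WARN
| table _time host source _raw

Comparing the owner, permissions, and email settings of a working alert against one of the new ones is also worth doing, since a per-alert difference (rather than the global SMTP setting) would explain why only the new ones fail.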
Hi, can we do the same for the BMC Remedy add-on? Does the BMC integration work as an ad hoc adaptive response?
Hi all, I am trying to set up SSE (v3.8.0); however, all the searches that use sseanalytics are failing:

ERROR SearchMessages - orig_component="script" app="Splunk_Security_Essentials" sid="1713345298.75124_FB5E91CC-FD94-432D-8605-815038CDF897" message_key="EXTERN:SCRIPT_NONZERO_RETURN" message=External search command 'sseanalytics' returned error code 1.
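When an external search command returns a non-zero code, its Python traceback usually ends up in the search's own search.log and often in the internal logs too; a sketch for surfacing anything the command wrote (the source paths are assumptions):

index=_internal (source=*python.log* OR source=*splunkd.log*) sseanalytics
| table _time host source _raw

The job inspector for one failing search (Job -> Inspect Job -> search.log) is the other place to look for the full traceback.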
Hi guys! Could you recommend a good way to archive logs from k8s to an S3 bucket? Maybe it is better to write a custom script, or to use some Splunk tooling (like Hadoop)?
@harishlnu one way I have done this in the past is to use two lots of automation. The first automation sends the email with a key nested in the HTML, such as "SOARKEY=", adding useful information for the other side, usually b64 encoded, at a minimum having the original container id in it. This sends the email and then stops the automation. Then you ingest the replies (you may need to set up mail rules to push replies to SOAR emails into a dedicated inbox) using SOAR, look for the SOARKEY in the HTML body, get the encoded string, decode it, and then <do something>. -- Hope this helps! Happy SOARing! --
Hi, here is the order in which those are managed at search time: https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Searchtimeoperationssequence You should ensure that a field has been defined before you can use it, e.g. in transforms.conf. For example, if you define ALIAS-field1 in props.conf, you cannot use that field1 as a SOURCE_KEY in transforms.conf. In this kind of situation you should extract the information from _raw instead of from a field which is defined in a later phase of the sequence; see the sketch below. I'm not sure whether your event.url field is the same one this TA has defined. If it is, you can see in props.conf that it is defined as EVAL-url = Host+URL, and if this is your event.url field, then it doesn't exist yet when you try to use it in transforms.conf. r. Ismo
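To make the ordering concrete, a minimal sketch (the stanza, field, and class names are hypothetical):

# props.conf -- fails: field1 is an alias, which is created after REPORT runs
[my:sourcetype]
FIELDALIAS-one = original_field AS field1
REPORT-get_user = extract_user_from_field1

# transforms.conf
[extract_user_from_field1]
SOURCE_KEY = field1              # empty at this phase, so the regex never matches
REGEX = user=(?<user>\S+)

# working alternative: extract from _raw (the default SOURCE_KEY)
[extract_user_from_raw]
REGEX = user=(?<user>\S+)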
Hi Team, I have a requirement where I need to send an email for approval; if they reply to that email, I need to read that reply and continue with the next steps. Could you please help me with your suggestions? Thanks in advance.