All Posts


@Alan_Chan  You could change the From email address under Email Settings > Mail Server Settings: https://docs.splunk.com/Documentation/SplunkCloud/latest/Alert/Emailnotification

If you want to send each mail from a different "from" address, then the sendemail command is probably what you need: https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/SearchReference/Sendemail

First, check if you can modify the "Send emails as" field under Email Settings in your Splunk Cloud instance. If you can't, or if the change doesn't take effect (e.g., due to domain restrictions), then yes, you should raise a support ticket.

Refer:
https://docs.splunk.com/Documentation/SplunkCloud/latest/Alert/Emailnotification#Steps_for_Splunk_Cloud_Platform (Email notification action - Splunk Documentation)
https://community.splunk.com/t5/Splunk-Enterprise-Security/How-to-change-the-quot-From-quot-address-when-an-alert-email-is/m-p/479230
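For illustration, a minimal sendemail sketch (the base search and addresses are placeholders; from= is the option that overrides the sender, though Splunk Cloud may still restrict which domains it accepts):

index=_internal log_level=ERROR earliest=-15m
| head 10
| sendemail to="oncall@example.com" from="alerts@example.org" subject="Splunk alert: recent errors" sendresults=true inline=true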
Sorry for being vague. I am trying to build the app using the Splunk Add-on Builder with a REST API call. The problem I am having is that the logs are coming in as one big blob, and I have tried multiple line_breaker options and tested them on regex101.

With respect to the streaming mode: I checked all the .py files associated with the app and could not find any instances of <streaming_mode>xml</streaming_mode> or <streaming_mode>simple</streaming_mode> in any of them. Is this one of the cases where I have to add it? Does Splunk default to XML?
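For reference, this is the shape of line_breaker attempt I mean - a hedged props.conf sketch with placeholder names, assuming the REST response is a stream of concatenated JSON objects:

[my_rest_sourcetype]
SHOULD_LINEMERGE = false
# The first capture group is consumed as the boundary between events;
# the lookahead assumes each new event starts with {" 
LINE_BREAKER = ([\r\n]+|[,\[])(?=\s*\{")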
We receive all alerts from Splunk Cloud with the sender alerts@splunkcloud.com. Can we change the sender to another domain? E.g. xxx@xxx.abc. Do we need to raise a support ticket to have a change request for it?
I have a problem with a Splunk Classic dashboard that I have created: the table panel is not connected properly to the dropdowns I created. As an example, here is the dashboard source:

<input type="text" token="end_id" searchWhenChanged="true">
  <label>End To End Id</label>
  <default>*</default>
</input>
<input type="dropdown" token="code_cihub">
  <label>Code Transaction CI HUB</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>code_cihub</fieldForLabel>
  <fieldForValue>code_cihub</fieldForValue>
  <search>
    <query>index="x"
| where isnotnull(StatusTransactionBI)
| eval "Status Transaction CI HUB" = if(StatusTransactionBI == "U000", "Success", "Failed")
| lookup statust_description.csv code as StatusTransactionBI OUTPUT description
| rename EndtoendIdOrgnlBI as "End To End Id", StatusTransactionBI as "Code Transaction CI HUB", description as "Description CI HUB"
| dedup "End To End Id"
| join type=outer "End To End Id"
    [search index="x"
    | where isnotnull(StatusTransactionOrgnl)
    | eval "Info Transaction CI HUB"=case(AddtionalOrgnl == "O 123", "Normal Transaction", AddtionalOrgnl == "O 70", "Velocity Transaction", AddtionalOrgnl == "O 71", "Gambling RFI", AddtionalOrgnl == "O 72", "Gambling OFI", AddtionalOrgnl == "O 73", "DTTOT Transaction", true(), "Other")
    | rename EndtoendIdOrgnl as "End To End Id"
    | search "Info Transaction CI HUB"="$info$"]
| search "End To End Id"="$end_id$" "Status Transaction CI HUB"="$status_cihub$"
| stats count by "Code Transaction CI HUB"
| rename "Code Transaction CI HUB" as code_cihub</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
</input>
<input type="dropdown" token="info">
  <label>Info Transaction CI HUB</label>
  <choice value="*">All</choice>
  <choice value="O 70">Velocity Transaction</choice>
  <choice value="O 71">Gambling RFI</choice>
  <choice value="O 72">Gambling OFI</choice>
  <choice value="O 73">DTTOT Transaction</choice>
  <default>*</default>
  <fieldForLabel>info</fieldForLabel>
  <fieldForValue>info</fieldForValue>
  <search>
    <query>index="x"
| where isnotnull(StatusTransactionBI)
| eval "Status Transaction CI HUB" = if(StatusTransactionBI == "U000", "Success", "Failed")
| lookup statust_description.csv code as StatusTransactionBI OUTPUT description
| rename EndtoendIdOrgnlBI as "End To End Id", StatusTransactionBI as "Code Transaction CI HUB", description as "Description CI HUB"
| dedup "End To End Id"
| join type=outer "End To End Id"
    [search index="x"
    | where isnotnull(StatusTransactionOrgnl)
    | eval "Info Transaction CI HUB"=case(AddtionalOrgnl == "O 123", "Normal Transaction", AddtionalOrgnl == "O 70", "Velocity Transaction", AddtionalOrgnl == "O 71", "Gambling RFI", AddtionalOrgnl == "O 72", "Gambling OFI", AddtionalOrgnl == "O 73", "DTTOT Transaction", true(), "Other")
    | rename EndtoendIdOrgnl as "End To End Id"]
| search "End To End Id"="$end_id$" "Status Transaction CI HUB"="$status_cihub$"
| stats count by "Info Transaction CI HUB"
| rename "Info Transaction CI HUB" as info</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
</input>
<input type="dropdown" token="status_cihub">
  <label>Status Transaction CI HUB</label>
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>status_cihub</fieldForLabel>
  <fieldForValue>status_cihub</fieldForValue>
  <search>
    <query>index="x"
| where isnotnull(StatusTransactionBI)
| eval "Status Transaction CI HUB" = if(StatusTransactionBI == "U000", "Success", "Failed")
| lookup statust_description.csv code as StatusTransactionBI OUTPUT description
| rename EndtoendIdOrgnlBI as "End To End Id", StatusTransactionBI as "Code Transaction CI HUB", description as "Description CI HUB"
| dedup "End To End Id"
| join type=outer "End To End Id"
    [search index="x"
    | where isnotnull(StatusTransactionOrgnl)
    | eval "Info Transaction CI HUB"=case(AddtionalOrgnl == "O 123", "Normal Transaction", AddtionalOrgnl == "O 70", "Velocity Transaction", AddtionalOrgnl == "O 71", "Gambling RFI", AddtionalOrgnl == "O 72", "Gambling OFI", AddtionalOrgnl == "O 73", "DTTOT Transaction", true(), "Other")
    | rename EndtoendIdOrgnl as "End To End Id"
    | search "Info Transaction CI HUB"="$info$"]
| search "End To End Id"="$end_id$" "Code Transaction CI HUB"="$code_cihub$"
| stats count by "Status Transaction CI HUB"
| rename "Status Transaction CI HUB" as status_cihub</query>
    <earliest>$time.earliest$</earliest>
    <latest>$time.latest$</latest>
  </search>
</input>
<row>
  <panel>
    <table>
      <title>Monitoring Response</title>
      <search>
        <query>index="x"
| where isnotnull(StatusTransactionBI)
| eval "Status Transaction CI HUB" = if(StatusTransactionBI == "U000", "Success", "Failed")
| lookup statust_description.csv code as StatusTransactionBI OUTPUT description
| rename EndtoendIdOrgnlBI as "End To End Id", StatusTransactionBI as "Code Transaction CI HUB", description as "Description CI HUB"
| dedup "End To End Id"
| join type=outer "End To End Id"
    [search index="x"
    | where isnotnull(StatusTransactionOrgnl)
    | eval "Info Transaction CI HUB"=case(AddtionalOrgnl == "O 123", "Normal Transaction", AddtionalOrgnl == "O 70", "Velocity Transaction", AddtionalOrgnl == "O 71", "Gambling RFI", AddtionalOrgnl == "O 72", "Gambling OFI", AddtionalOrgnl == "O 73", "DTTOT Transaction", true(), "Other")
    | rename EndtoendIdOrgnl as "End To End Id"
    | search "Info Transaction CI HUB"="$info$"]
| search "End To End Id"="$end_id$" "Code Transaction CI HUB"="$code_cihub$" "Status Transaction CI HUB"="$status_cihub$"
| table _time, "End To End Id", "Code Transaction CI HUB", "Info Transaction CI HUB", "Status Transaction CI HUB", "Description CI HUB"
| sort - _time</query>
        <earliest>$time.earliest$</earliest>
        <latest>$time.latest$</latest>
      </search>
    </table>
  </panel>
</row>

The main problem I'm facing is the "Info Transaction CI HUB" dropdown, which I made static: when I select one of its values, the contents of the "Monitoring Response" table do not change according to the value I selected. Please help me solve this problem. Thank you.
@nieminej  Instead of reloading the entire deployment server, you can reload specific server classes with ./splunk reload deploy-server -class <serverclass name>. This way, only the changes made to the corresponding deployment apps get reloaded, reducing the load on the deployment server.

In your situation you can't designate a specific server class, and with multiple server classes involved, setting up individual cron jobs for each one becomes impractical. Instead of an immediate reload deploy-server, you can use a single scheduled reload (e.g., via cron) to batch updates:

0 * * * * /opt/splunk/bin/splunk reload deploy-server -auth admin:<password>

The shared app stays static, and GUI changes only affect serverclass membership, not the app bundle itself, so batching reloads reduces how often the DS takes the reload load.

You can also reload the deployment server via the API or the Splunk Python SDK:
https://community.splunk.com/t5/Splunk-Dev/With-the-Splunk-Python-SDK-how-do-I-reload-deploy-client-with-2/m-p/438775

There is also an app on Splunkbase that adds a simple |reloadds search command to reload deployment server configs from disk, as well as an alert action which does the same thing:
https://splunkbase.splunk.com/app/7339

Check this for more details:
https://community.splunk.com/t5/Getting-Data-In/Deployment-Server-reload-configs-without-restarting-splunk/m-p/124423

REST call:

curl -ku admin:your_password https://your_splunk_server:8089/servicesNS/-/system/deployment/server/config/_reload

To reload a single server class, use the URL below, substituting your host name for myHostName and the server class you want to reload for serverClassName:

https://myHostName:8089/services/deployment/server/serverclasses/serverClassName/reload
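If you prefer to script it, here is a minimal, untested sketch using the Splunk Python SDK (splunklib), with placeholder credentials and the default management port; it simply POSTs to the same _reload endpoint as the curl call above:

import splunklib.client as client

# Connect to the deployment server's management port (placeholder values)
service = client.connect(
    host="your_splunk_server",
    port=8089,
    username="admin",
    password="your_password",
)

# Ask the DS to reload serverclass.conf and its app bundles from disk
response = service.post("deployment/server/config/_reload")
print(response.status)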
Hi Victor, I hope you are having a good week so far. It seems like you know what I don't. We are using the add-on between SNOW and Splunk, and we use APIs on both systems to integrate. For this purpose I created an account in SNOW with access to the Splunk and incident tables, one that cannot log in using the user interface. When we test the creation of an incident from the Splunk interface, OAuth2 works fine, but then, in addition, it uses the account of the person running the test to log in to ServiceNow. I thought that OAuth2 would be sufficient. Why would it ask for another user/pwd? Regards, Max
I wanted to apply the same base configuration to all workstations and have serverclasses divided by organization, with the base app being the same for everyone. Now I have a problem: when you make changes (add a host through the web GUI to one serverclass) and click save, it changes the bundle epoch time under global_bundles, and then the other serverclasses say the file does not exist on the server when clients try to download the app. If I then run reload deploy-server, it's fine again. But every time I need to add a client to any workstation serverclass, it breaks all the other serverclasses. It's pretty rough to run the reload deploy-server command every time because it puts a pretty high load on the DS. Is there any other way to handle this than making class-specific base apps? Running 9.4.1, 12 vCPU/12 GB RAM.
In that case, the existing data should not have been lost! Assuming you are using the same login with the same permissions, there shouldn't be an issue with RBAC/permissions. If you didn't make changes to indexes.conf, then the index should still exist. You mentioned the retention is set to 6 months - I assume you have been getting data in fairly recently? Are you seeing data in other indexes from other inputs? Can you see your forwarders sending their _internal logs? Please let me know how you get on, and consider adding karma to this or any other answer if it has helped. Regards Will
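For example, a quick check along these lines (a hedged sketch - adjust the time range and add a host filter for your forwarders as needed):

index=_internal sourcetype=splunkd
| stats count latest(_time) as last_seen by host
| convert ctime(last_seen)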
Well, we ended up breaking down the deployment cluster, using dedicated DSs and dividing the clients between them, because we have a deadline we need to reach.

We figured out that the problem might be in the NFS shared drive: DS1 had the only working hashes, and every time a client phoned home to DS2 it lost its apps because of a checksum mismatch - DS2 had no reference to any apps for that specific client and then just uninstalled them. We don't know for sure, but it's not a significant problem anymore because we changed the architecture.
We actually built this exact thing for the community. It will email you when a new version of Splunk Enterprise is released: https://spectakl.io/resources/watch

Mods, if this is not okay, please remove. We are not affiliated with, endorsed by, or sponsored by Splunk LLC.
Ah yes, okay, that is Classic only - sorry, I didn't realise you were wanting Dashboard Studio!
Technically there is a 3rd option (and often with Splunk there may be a 4th), but this example shows you how to first detect errors and then mark the events that fit within the window required of that error. It creates 40 random events with an occasional error, then it basically copies the error time up and down the non-error events and then filters those that match the time window of the closest error.

| makeresults count=40
| streamstats c
| eval _time=now() - c*20
| eval log_data=if(c % (random() % 30) = 0, "bla error message bla", "normal event message")
| fields - c
``` The above creates a simple 40 event data set with an occasional error ```
``` Ensure time descending order and mark the events that have an error ```
| sort - _time
| streamstats window=1 values(eval(if(match(log_data,"error"), _time, null()))) as error_time
``` Save the error time and copy the error time down to all following records until the next error ```
| eval start_time=error_time
| filldown error_time
``` Now filter events within 60 seconds prior to the error ```
| eval INCLUDE=if(_time>=(error_time-60) AND _time<=error_time, "YES", "NO")
``` Now do the same in reverse, i.e. time ascending order ```
| sort _time
| filldown start_time
``` and filter events that are within 60 seconds AFTER the error ```
| eval INCLUDE=if(_time<=(start_time+60) AND _time>=start_time, "YES", INCLUDE)
| fields - start_time error_time

Bear in mind that this could be an expensive search, as it does 2 sorts and 2 streamstats, but in your case you could start with index=project1 sourcetype=pc1 followed by the SPL after the data setup above.
Hi @mark_groenveld  Is that the full event or a field in your event? Is the whole event JSON? If possible, please give some full examples. Are the names of the 3 clusters always CLUSTER followed by a single character? Thanks Will
I'd go the other way around - either extract the values into separate fields or use a tokenizer to split the field into multiple values. Searching with a wildcard at the beginning of the search term is very inefficient.
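For illustration, a hedged sketch of the tokenizer route, reusing the TAGS field from this thread and assuming a comma delimiter:

<base search>
| makemv delim="," TAGS
| search TAGS IN ("tag1", "tag2")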
 Did anyone ever solve this? I am having the same issue with 4.2.0
I maintain an app on Splunk, the AbuseIPDB App. This app uses a collection that holds a set of key-value pairs for things like user state and settings, and it's looked up on every command (e.g. abuseipdbcheck ip="127.0.0.1"). We had been receiving bug reports about a KeyError that seemed to have been fixed by setting replicate=true for the collection. I suppose that because the app's configuration collection was not being replicated, distributed searches failed (since the configuration collection was not being found on the individual search peers? hence the KeyError). However, I've just received another report with the same issue, this time from a Splunk Cloud Victoria setup. The collection does have replicate=true. Can anyone give some guidance on this?
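For reference, a minimal collections.conf sketch of the setting in question (the stanza name here is illustrative, not the app's actual collection name):

[app_settings]
# Replicate this KV Store collection to the search peers so
# distributed searches can look it up
replicate = true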
Turns out the solution was simpler than I thought. The multiselect is populated from a query. Within that query, I just created another field that took the tags and added wildcard characters to the front and back.

<base search>
| eval TAGS = split(TAGS, ",")
| mvexpand TAGS
| dedup TAGS
| table TAGS
| eval TAGS_WILDCARD = "*" + TAGS + "*"
| sort TAGS

With this, I mapped TAGS to the dynamic menu label field and TAGS_WILDCARD to the dynamic menu value field. I was then able to use the token filter |s to wrap each value in quotes. Ultimately, I ended up with this:

<base search>
| search TAGS IN ($includeTag|s$) AND NOT TAGS IN ($excludeTag|s$)
Sure. Here are examples of the values.

{"CLUSTER1.COM","viewSiteAsUser.hasAccess":true}
{"CLUSTER_VIP":"CLUSTER1.COM"}
I'm not sure what you mean by "extract" in this context. Do you already have your fields extracted and are using the word in a different sense, or do you want to extract values from the raw data? Give us a few more example events and describe what the result should be (based on that example data) and why.
I am searching a key:value field in a report app where the values are inconsistent but consistently include a cluster name. Example of the key:value:

APP_Details:{"CLUSTER_VIP":"CLUSTERX.URL.COM","Access":true}

There are over 100 APP_Details values for CLUSTERX. How can I extract CLUSTERX (there are three different cluster names) so it shows as a single value per cluster? Thanks
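For illustration, one hedged way I could imagine pulling the cluster name out with rex, assuming APP_Details is already an extracted field and the cluster name is always the first label of the CLUSTER_VIP value:

<your search>
| rex field=APP_Details "\"CLUSTER_VIP\":\"(?<cluster>[^.\"]+)"
| stats count by cluster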