All Posts


@gcusello I was able to get the desired output with an inner join.
I'm creating it from scratch; 8080 and 9887 are not in listen state, and I'm unable to connect to the master from the peer.

ProxyConfig - Failed to initialize http_proxy from server.conf for splunkd. Please make sure that the http_proxy property is set as http_proxy=http://host:port in case HTTP proxying needs to be enabled.
INFO ProxyConfig - Failed to initialize https_proxy from server.conf for splunkd. Please make sure that the https_proxy property is set as https_proxy=http://host:port in case HTTP proxying needs to be enabled.
INFO ProxyConfig - Failed to initialize the proxy_rules setting from server.conf for splunkd. Please provide a valid set of proxy_rules in case HTTP proxying needs to be enabled.
INFO ProxyConfig - Failed to initialize the no_proxy setting from server.conf for splunkd. Please provide a valid set of no_proxy rules in case HTTP proxying needs to be enabled.
INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunk/var/log/introspection/http_event_collector_metrics.log'.
WARN SSLOptions - server.conf/[search_state]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
WARN SSLOptions - <internal>.conf/[<internal>]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
IntrospectionGenerator:resource_usage - RU_main - I-data gathering (Resource Usage) starting; period=10s
Can you add your whole SPL query here? As @ITWhisperer said, your example doesn't contain any fields with the value Radius.
Hello experts. I'm a Splunk newbie. I am using the Jira Service Desk Simple Add-on to send Splunk alerts to Jira tickets, and we have confirmed that the alerts do create Jira tickets successfully. However, for the same alert, some tickets receive the customfield value correctly while others get no value at all. In Splunk the value is confirmed to exist, but it is not retrieved. No matter how much I searched, I couldn't find the reason. If the Jira and Splunk field mappings were incorrect, no ticket should be able to get the value, yet only specific tickets fail to get it. What could the problem be? Example below: the Client value is always present, but as shown in the images, the customer value does not exist.

[Error Ticket]

[Normal Ticket]
Has this cluster worked earlier, or are you just setting it up from scratch? Did you see any connection attempts in the master's internal logs? Are there any other messages in the peer's internal logs? Can you use e.g. curl or nc to connect to the master's management port from the peer?
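For example, a quick way to look for clustering errors in the internal logs is something like the sketch below (run it on an instance that can search both the manager's and the peer's _internal logs, or on each separately; the component names are the common ones and may differ slightly by version):

index=_internal sourcetype=splunkd (component=CMMaster OR component=CMSlave) (log_level=ERROR OR log_level=WARN)
| sort - _time
| table _time host component log_level _raw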
Hi @isoutamo, yes, this is on an indexer cluster, on all 3 nodes (ID1, ID2, ID3). Yes, the master node is up and running; I can see no error logs in splunkd.log on the master.
Hi, was this on an indexer? Have you checked that the master node is up and running? If it is up and running, there should be some more hints in its internal logs. Just check those to get more hints. r. Ismo
Hi @phanichintha, is this issue resolved? If yes, could you post your solution? I am facing the same issue: https://community.splunk.com/t5/Deployment-Architecture/ERROR-CMSlave-Waiting-for-the-cluster-manager-to-come-up/m-p/704380#M28826
Doesn't anyone have the same problem here?
Hi guys, after adding the [clustering] stanza and [replication_port://9887] in the indexer cluster, I'm getting the error below:

ERROR - Waiting for the cluster manager to come up... (retrying every second)

The service is running, but it gets stuck at this point: Waiting for web server at http://<ip>:8000 to be available. Later I got this warning: WARNING: web interface does not seem to be available!

How do I fix this issue? Ports 8000 and 9887 are already open, and I've set the same pass4SymmKey on all 3 servers in the indexer cluster.
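For reference, a minimal sketch of what the peer-side server.conf stanzas usually look like (the manager address and key below are placeholders, not your actual values; on older versions the settings are named master_uri and mode = slave):

# server.conf on each indexer peer - sketch only, adjust to your environment
[replication_port://9887]

[clustering]
mode = peer
manager_uri = https://<cluster_manager_ip>:8089
pass4SymmKey = <same key as configured on the cluster manager>

Note that manager_uri must point at the manager's management port (8089 by default), not the web port 8000.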
Our add-on requires dateparser, and dateparser in turn requires the regex library. I added the libraries to my add-on (built with Add-on Builder 4.3); however, it fails in Splunk 9.2 with:

File "/opt/splunk/etc/apps/.../regex/_regex_core.py", line 21, in <module>
import regex._regex as _regex
ModuleNotFoundError: No module named 'regex._regex'

If I try on Splunk 9.3 it works fine. I know the Python version changed from 3.7 to 3.9 in Splunk 9.3, but regex version 2024.4.16 seems to support Python 3.7. I would appreciate any insight on how to solve this issue.
Several problems with the illustrated search. The most important one is the use of join; this is rarely the solution to any problem in Splunk. Then there is the problem of bad quotation marks. Even without this, as @ITWhisperer says, illustrating useful data input is the best way to enable volunteers to help you. Your illustrated search syntax is so mixed up I cannot even tell whether the two searches are using the same index. Without any assumption about which data source(s) are used, you can get the desired result using append and stats, assuming that my speculation about your syntax reflects the correct filter.

index=source "status for : *" "Not available"
| rex field=_raw "status for : (?<ORDERS>.*?)"
| fields ORDERS
| dedup ORDERS
| eval status = "Not available"
| append
    [search Message="Request for : *"
    | rex field=_raw "data=[A-Za-z0-9-]+\|(?P<ORDERS>[\w\.]+)"
    | rex field=_raw "\"unique\"\:\"(?P<UNIQUEID>[A-Z0-9]+)\""
    | fields ORDERS UNIQUEID
    | dedup ORDERS UNIQUEID]
| stats values(*) as * by ORDERS
| where status == "Not available"
| fields - status

Note that in the search command, AND between terms is implied and rarely needs to be spelled out. Now, if the two searches use the same index, it is perhaps more efficient to NOT use append (much less join). Instead, combine the two in one search.

index=source (("status for : *" "Not available") OR Message="Request for : *")
| rex field=_raw "status for : (?<ORDERS>.*?)"
| rex field=_raw "data=[A-Za-z0-9-]+\|(?P<ORDERS>[\w\.]+)"
| rex field=_raw "\"unique\"\:\"(?P<UNIQUEID>[A-Z0-9]+)\""
| fields ORDERS UNIQUEID
| eval not_available = if(searchmatch("Not available"), "yes", null())
| stats values(*) as * by ORDERS
| where isnotnull(not_available)
| fields - not_available
Is it possible to also hide these two options from the settings in Splunk?
You are looking at the wrong tool in the box.  Do not use rex to extract fields from structure data like JSON which your event contains.  Instead, extract the JSON object then use tools like spath to extract data fields.    | rex "^[^{]+(?<message_body>.+})" | spath input=message_body | table *.alias *.responders{}.name   Your sample data will give alert.alias entity.alias params.alert.alias params.entity.alias alert.responders{}.name entity.responders{}.name params.alert.responders{}.name params.entity.responders{}.name FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777, FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777, FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777, FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777, Monitoring_Admin Monitoring_Admin Monitoring_Admin Monitoring_Admin Additional pointers: The sample JSON contains 4 different leaf nodes all named alias.  There is no inherent logic to say they are all the same. The sample JSON contains 4 different arrays that all contain leaf nodes that are all named name.  There is no inherent logic to say they are all the same. What this means is that you need to ask your developer which node you need data from. Lastly, this JSON has a deep structure.  If you are only interested in select few nodes, you can also use a JSON function if your server is 8.2 or later.  For example,   | rex "^[^{]+(?<message_body>.+})" | eval alias = json_extract(message_body, "alert.alias"), name = json_extract(message_body, "alert.responders{}.name") | table alias name   The output will be alias name FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777, Monitoring_Admin Here is an emulation of your sample data.  Play with it and compare with real data   | makeresults | eval _raw = "[36mINFO[0m[2024-11-13T13:37:23.9114215-05:00] Message body: {\"actionType\":\"custom\",\"customerId\":\"3a1f4387-b87b-4a3a-a568-cc372a86d8e4\",\"ownerDomain\":\"integration\",\"ownerId\":\"8b500163-8476-4b0e-9ef7-2cfdaa272adf\",\"discardScriptResponse\":true,\"sendCallbackToStreamHub\":false,\"requestId\":\"18dcdb1b-14d6-4b10-ad62-3f73acaaef2a\",\"action\":\"Close\",\"productSource\":\"Opsgenie\",\"customerDomain\":\"siteone\",\"integrationName\":\"Opsgenie Edge Connector\",\"integrationId\":\"8b500163-8476-4b0e-9ef7-2cfdaa272adf\",\"customerTransitioningOrConsolidated\":false,\"source\":{\"name\":\"\",\"type\":\"system\"},\"type\":\"oec\",\"receivedAt\":1731523037863,\"ownerId\":\"8b500163-8476-4b0e-9ef7-2cfdaa272adf\",\"params\":{\"type\":\"oec\",\"alertId\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"customerId\":\"3a1f4387-b87b-4a3a-a568-cc372a86d8e4\",\"action\":\"Close\",\"integrationId\":\"8b500163-8476-4b0e-9ef7-2cfdaa272adf\",\"integrationName\":\"Opsgenie Edge Connector\",\"integrationType\":\"OEC\",\"customerDomain\":\"siteone\",\"alertDetails\":{\"Raw\":\"\",\"Results Link\":\"https://hostname:8000/app/search/search?q=%7Cloadjob%20scheduler__td26605__search__RMD5e461b39d4ff19795_at_1731522600_38116%20%7C%20head%204%20%7C%20tail%201&earliest=0&latest=now\",\"SuppressClosed\":\"True\",\"TeamsDescription\":\"True\"},\"alertAlias\":\"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,\",\"receivedAt\":1731523037863,\"customerConsolidated\":false,\"customerTransitioningOrConsolidated\":false,\"productSource\":\"Opsgenie\",\"source\":{\"name\":\"\",\"type\":\"system\"},\"alert\":{\"alertId\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"id\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"type\":\"alert\",\"message\":\"[Splunk] Load Balancer Member 
Status\",\"tags\":[],\"tinyId\":\"14585\",\"entity\":\"\",\"alias\":\"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,\",\"createdAt\":1731522737697,\"updatedAt\":1731523038582000000,\"username\":\"System\",\"responders\":[{\"id\":\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\",\"type\":\"team\",\"name\":\"Monitoring_Admin\"}],\"teams\":[\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\"],\"actions\":[],\"priority\":\"P3\",\"oldPriority\":\"P3\",\"source\":\"Splunk\"},\"entity\":{\"alertId\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"id\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"type\":\"alert\",\"message\":\"[Splunk] Load Balancer Member Status\",\"tags\":[],\"tinyId\":\"14585\",\"entity\":\"\",\"alias\":\"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,\",\"createdAt\":1731522737697,\"updatedAt\":1731523038582000000,\"username\":\"System\",\"responders\":[{\"id\":\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\",\"type\":\"team\",\"name\":\"Monitoring_Admin\"}],\"teams\":[\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\"],\"actions\":[],\"priority\":\"P3\",\"oldPriority\":\"P3\",\"source\":\"Splunk\"},\"mappedActionDto\":{\"mappedAction\":\"postActionToOEC\",\"extraField\":\"\"},\"ownerId\":\"8b500163-8476-4b0e-9ef7-2cfdaa272adf\"},\"integrationType\":\"OEC\",\"alert\":{\"alertId\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"id\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"type\":\"alert\",\"message\":\"[Splunk] Load Balancer Member Status\",\"tags\":[],\"tinyId\":\"14585\",\"entity\":\"\",\"alias\":\"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,\",\"createdAt\":1731522737697,\"updatedAt\":1731523038582000000,\"username\":\"System\",\"responders\":[{\"id\":\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\",\"type\":\"team\",\"name\":\"Monitoring_Admin\"}],\"teams\":[\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\"],\"actions\":[],\"priority\":\"P3\",\"oldPriority\":\"P3\",\"source\":\"Splunk\"},\"customerConsolidated\":false,\"customerId\":\"3a1f4387-b87b-4a3a-a568-cc372a86d8e4\",\"action\":\"Close\",\"mappedActionDto\":{\"mappedAction\":\"postActionToOEC\",\"extraField\":\"\"},\"alertId\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"alertAlias\":\"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,\",\"alertDetails\":{\"Raw\":\"\",\"Results Link\":\"https://hostname:8000/app/search/search?q=%7Cloadjob%20scheduler__td26605__search__RMD5e461b39d4ff19795_at_1731522600_38116%20%7C%20head%204%20%7C%20tail%201&earliest=0&latest=now\",\"SuppressClosed\":\"True\",\"TeamsDescription\":\"True\"},\"entity\":{\"alertId\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"id\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"type\":\"alert\",\"message\":\"[Splunk] Load Balancer Member Status\",\"tags\":[],\"tinyId\":\"14585\",\"entity\":\"\",\"alias\":\"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,\",\"createdAt\":1731522737697,\"updatedAt\":1731523038582000000,\"username\":\"System\",\"responders\":[{\"id\":\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\",\"type\":\"team\",\"name\":\"Monitoring_Admin\"}],\"teams\":[\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\"],\"actions\":[],\"priority\":\"P3\",\"oldPriority\":\"P3\",\"source\":\"Splunk\"}} [36mmessageId[0m=7546739e-2bab-414d-94b5-b0f205208932" ``` data emulation above ```  
@ITWhisperer, I want to make one table where we have the date in one column and the counts in another column.
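For example, a minimal sketch of that shape (the index and the span are placeholders for whatever your base search and granularity are):

index=your_index
| timechart span=1d count
| rename _time as date
| fieldformat date = strftime(date, "%Y-%m-%d")

This gives one row per day, with the date in one column and the count in the other.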
Sparklines are numeric, so they will only show numbers. You could use a drilldown to open a panel to show the errors, but for a tooltip-type hover, you'd probably have to implement that yourself in JavaScript.
Is it possible to display dynamic values when hovering over a sparkline in a Splunk dashboard? In my case, the sparkline shows the success count. So, in case of failure, is it possible to display all the hourly error messages while hovering over the graph?
Your earliest= statement is wrong; it should be earliest=-1y@y. You have an extra @ sign (-1@y@y).
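For example, a minimal sketch with the corrected modifier (the index and the reporting command are placeholders; only the time modifiers matter here):

index=your_index earliest=-1y@y latest=@y
| timechart span=1mon count

earliest=-1y@y latest=@y covers last calendar year; -1@y@y is not a valid relative time expression.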
If you want to use different policies for different correlation searches, you should add some filtering criteria in your second Notable Event Aggregation Policy (NEAP). For example, you can use search_name (or source) matches correlation_search2 in the "include the events if" section of the Filtering Criteria and Instructions tab of your second NEAP.
Hi everyone, I’m working with Splunk IT Service Intelligence (ITSI) and want to automate the creation of maintenance windows using a scheduled search in SPL. Ideally, I’d like to use the rest command within SPL to define a maintenance window, assign specific entities and services to it, and have it run on a schedule. Is it possible to set up maintenance windows with entities and services directly from SPL? If anyone has sample SPL code or guidance on setting up automated maintenance windows, it would be very helpful! Thanks in advance!
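For what it's worth, as far as I know the SPL rest command only issues GET requests, so from pure SPL you can list maintenance windows but creating one (with its entities and services) normally goes through the ITSI REST API from a script or an external call such as curl. As a rough sketch under that assumption, listing existing windows might look like the search below; the endpoint path and field names are assumptions you should verify against the ITSI REST API reference for your version:

| rest /servicesNS/nobody/SA-ITOA/maintenance_services_interface/maintenance_calendar
| table title start_time end_time objects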