All Posts


Did this cluster work earlier, or are you setting it up from scratch? Did you see any connection attempts in the master's internal logs? Are there any other messages in the peers' internal logs? Can you use e.g. curl or nc to connect to the master's management port from a peer?
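The connectivity check suggested above can also be scripted. A minimal sketch in Python, using only the standard library (the host is a placeholder; a cluster manager's management port is 8089 by default):

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. run from a peer against the manager's management port (placeholder host):
# print(port_reachable("cluster-manager.example.com", 8089))
```

This only proves the TCP port is open; a successful connection does not rule out pass4SymmKey or certificate problems, which show up in splunkd.log instead.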
Hi @isoutamo. Yes, this is an indexer cluster, on all 3 nodes ID1, ID2, ID3. Yes, the master node is up and running, and I can see no error logs in splunkd.log on the master.
Hi. Was this on an indexer? Have you checked that the master node is up and running? If it is up and running, there should be some more hints in its internal logs. Just check those to get more hints. r. Ismo
Hi @phanichintha, is this issue resolved? If yes, could you post your solution? I am facing the same issue: https://community.splunk.com/t5/Deployment-Architecture/ERROR-CMSlave-Waiting-for-the-cluster-manager-to-come-up/m-p/704380#M28826
Does anyone else have the same problem here?
Hi guys. After adding a [clustering] stanza and [replication_port://9887] in the indexer cluster, I am getting the error below: ERROR - Waiting for the cluster manager to come up... (retrying every second). The service is running, but it gets stuck at this point: Waiting for web server at http://<ip>:8000 to be available. Later, I got this warning: WARNING: web interface does not seem to be available! How do I fix this issue? Ports 8000 and 9887 are already open, and I've set the same pass4SymmKey on all 3 servers in the indexer cluster. tags: @any
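For reference, a minimal sketch of the server.conf stanzas this error usually involves. Hostnames, factors, and the key are placeholders, and on pre-9.x versions the settings are spelled master_uri, mode = master, and mode = slave — check the docs for your version:

```ini
# --- cluster manager node: server.conf (sketch; values are placeholders) ---
[clustering]
mode = manager
replication_factor = 2
search_factor = 2
pass4SymmKey = <same-secret-on-all-nodes>

# --- each indexer peer: server.conf ---
[replication_port://9887]

[clustering]
mode = peer
manager_uri = https://<cluster-manager-host>:8089
pass4SymmKey = <same-secret-on-all-nodes>
```

Note that the peer connects to the manager's management port (8089 by default), so that port must be reachable too, not just 8000 and 9887.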
Our add-on requires dateparser, and dateparser in turn requires the regex library. I added the libraries to my add-on (built with Add-on Builder 4.3); however, it fails in Splunk 9.2 with: File "/opt/splunk/etc/apps/.../regex/_regex_core.py", line 21, in <module> import regex._regex as _regex ModuleNotFoundError: No module named 'regex._regex' If I try on Splunk 9.3, it works fine. I know the Python version changed from 3.7 to 3.9 in Splunk 9.3, but regex version 2024.4.16 seems to be good for Python 3.7. I would appreciate any insight on how to solve this issue.
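One likely explanation, sketched below: regex._regex is a compiled C extension, and compiled extensions are built per interpreter version — the shared-library filename carries an ABI tag, so a copy built for Python 3.9 simply is not found or loadable under Splunk 9.2's Python 3.7. You can inspect the tag your interpreter expects:

```python
import sysconfig

# Compiled extension modules (like regex._regex) are version-specific:
# their filename suffix encodes the interpreter ABI, for example
# ".cpython-39-x86_64-linux-gnu.so". A file with a 3.9 tag is invisible
# to a 3.7 interpreter, which then raises ModuleNotFoundError.
suffix = sysconfig.get_config_var("EXT_SUFFIX")
print("this interpreter loads extensions named *" + suffix)
```

If that is the cause, you would need to bundle the regex wheel built for the Python version each Splunk release ships, rather than one copy for both.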
There are several problems with the illustrated search. The most important one is the use of join, which is rarely the solution to any problem in Splunk. Then, there is the problem of bad quotation marks. Even without these, as @ITWhisperer says, illustrating useful data input is the best way to enable volunteers to help you. Your illustrated search syntax is so mixed up I cannot even tell whether the two searches are using the same index. Without any assumption about which data source(s) are used, you can get the desired result using append and stats, assuming that my speculation about your syntax reflects the correct filter.

index=source "status for : *" "Not available"
| rex field=_raw "status for : (?<ORDERS>.*?)"
| fields ORDERS
| dedup ORDERS
| eval status = "Not available"
| append
    [search Message="Request for : *"
    | rex field=_raw "data=[A-Za-z0-9-]+\|(?P<ORDERS>[\w\.]+)"
    | rex field=_raw "\"unique\"\:\"(?P<UNIQUEID>[A-Z0-9]+)\""
    | fields ORDERS UNIQUEID
    | dedup ORDERS UNIQUEID]
| stats values(*) as * by ORDERS
| where status == "Not available"
| fields - status

Note that in the search command, AND between terms is implied and rarely needs to be spelled out. Now, if the two searches use the same index, it is perhaps more efficient to NOT use append (much less join). Instead, combine the two in one search.

index=source (("status for : *" "Not available") OR Message="Request for : *")
| rex field=_raw "status for : (?<ORDERS>.*?)"
| rex field=_raw "data=[A-Za-z0-9-]+\|(?P<ORDERS>[\w\.]+)"
| rex field=_raw "\"unique\"\:\"(?P<UNIQUEID>[A-Z0-9]+)\""
| fields ORDERS UNIQUEID
| eval not_available = if(searchmatch("Not available"), "yes", null())
| stats values(*) as * by ORDERS
| where isnotnull(not_available)
| fields - not_available
Is it possible to hide these two options from the settings in Splunk as well?
You are looking at the wrong tool in the box. Do not use rex to extract fields from structured data like the JSON your event contains. Instead, extract the JSON object, then use tools like spath to extract data fields.

| rex "^[^{]+(?<message_body>.+})"
| spath input=message_body
| table *.alias *.responders{}.name

With your sample data, this gives a row in which the four alias columns (alert.alias, entity.alias, params.alert.alias, params.entity.alias) all contain FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777, and the four responder-name columns (alert.responders{}.name, entity.responders{}.name, params.alert.responders{}.name, params.entity.responders{}.name) all contain Monitoring_Admin.

Additional pointers: The sample JSON contains 4 different leaf nodes all named alias; there is no inherent logic to say they are all the same. It also contains 4 different arrays whose leaf nodes are all named name; again, there is no inherent logic to say they are all the same. What this means is that you need to ask your developer which node you need data from.

Lastly, this JSON has a deep structure. If you are only interested in a select few nodes, you can also use a JSON function if your server is 8.2 or later. For example,

| rex "^[^{]+(?<message_body>.+})"
| eval alias = json_extract(message_body, "alert.alias"), name = json_extract(message_body, "alert.responders{}.name")
| table alias name

The output will be a single row with alias FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777, and name Monitoring_Admin.

Here is an emulation of your sample data.
Play with it and compare with real data   | makeresults | eval _raw = "[36mINFO[0m[2024-11-13T13:37:23.9114215-05:00] Message body: {\"actionType\":\"custom\",\"customerId\":\"3a1f4387-b87b-4a3a-a568-cc372a86d8e4\",\"ownerDomain\":\"integration\",\"ownerId\":\"8b500163-8476-4b0e-9ef7-2cfdaa272adf\",\"discardScriptResponse\":true,\"sendCallbackToStreamHub\":false,\"requestId\":\"18dcdb1b-14d6-4b10-ad62-3f73acaaef2a\",\"action\":\"Close\",\"productSource\":\"Opsgenie\",\"customerDomain\":\"siteone\",\"integrationName\":\"Opsgenie Edge Connector\",\"integrationId\":\"8b500163-8476-4b0e-9ef7-2cfdaa272adf\",\"customerTransitioningOrConsolidated\":false,\"source\":{\"name\":\"\",\"type\":\"system\"},\"type\":\"oec\",\"receivedAt\":1731523037863,\"ownerId\":\"8b500163-8476-4b0e-9ef7-2cfdaa272adf\",\"params\":{\"type\":\"oec\",\"alertId\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"customerId\":\"3a1f4387-b87b-4a3a-a568-cc372a86d8e4\",\"action\":\"Close\",\"integrationId\":\"8b500163-8476-4b0e-9ef7-2cfdaa272adf\",\"integrationName\":\"Opsgenie Edge Connector\",\"integrationType\":\"OEC\",\"customerDomain\":\"siteone\",\"alertDetails\":{\"Raw\":\"\",\"Results Link\":\"https://hostname:8000/app/search/search?q=%7Cloadjob%20scheduler__td26605__search__RMD5e461b39d4ff19795_at_1731522600_38116%20%7C%20head%204%20%7C%20tail%201&earliest=0&latest=now\",\"SuppressClosed\":\"True\",\"TeamsDescription\":\"True\"},\"alertAlias\":\"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,\",\"receivedAt\":1731523037863,\"customerConsolidated\":false,\"customerTransitioningOrConsolidated\":false,\"productSource\":\"Opsgenie\",\"source\":{\"name\":\"\",\"type\":\"system\"},\"alert\":{\"alertId\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"id\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"type\":\"alert\",\"message\":\"[Splunk] Load Balancer Member 
Status\",\"tags\":[],\"tinyId\":\"14585\",\"entity\":\"\",\"alias\":\"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,\",\"createdAt\":1731522737697,\"updatedAt\":1731523038582000000,\"username\":\"System\",\"responders\":[{\"id\":\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\",\"type\":\"team\",\"name\":\"Monitoring_Admin\"}],\"teams\":[\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\"],\"actions\":[],\"priority\":\"P3\",\"oldPriority\":\"P3\",\"source\":\"Splunk\"},\"entity\":{\"alertId\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"id\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"type\":\"alert\",\"message\":\"[Splunk] Load Balancer Member Status\",\"tags\":[],\"tinyId\":\"14585\",\"entity\":\"\",\"alias\":\"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,\",\"createdAt\":1731522737697,\"updatedAt\":1731523038582000000,\"username\":\"System\",\"responders\":[{\"id\":\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\",\"type\":\"team\",\"name\":\"Monitoring_Admin\"}],\"teams\":[\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\"],\"actions\":[],\"priority\":\"P3\",\"oldPriority\":\"P3\",\"source\":\"Splunk\"},\"mappedActionDto\":{\"mappedAction\":\"postActionToOEC\",\"extraField\":\"\"},\"ownerId\":\"8b500163-8476-4b0e-9ef7-2cfdaa272adf\"},\"integrationType\":\"OEC\",\"alert\":{\"alertId\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"id\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"type\":\"alert\",\"message\":\"[Splunk] Load Balancer Member 
Status\",\"tags\":[],\"tinyId\":\"14585\",\"entity\":\"\",\"alias\":\"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,\",\"createdAt\":1731522737697,\"updatedAt\":1731523038582000000,\"username\":\"System\",\"responders\":[{\"id\":\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\",\"type\":\"team\",\"name\":\"Monitoring_Admin\"}],\"teams\":[\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\"],\"actions\":[],\"priority\":\"P3\",\"oldPriority\":\"P3\",\"source\":\"Splunk\"},\"customerConsolidated\":false,\"customerId\":\"3a1f4387-b87b-4a3a-a568-cc372a86d8e4\",\"action\":\"Close\",\"mappedActionDto\":{\"mappedAction\":\"postActionToOEC\",\"extraField\":\"\"},\"alertId\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"alertAlias\":\"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,\",\"alertDetails\":{\"Raw\":\"\",\"Results Link\":\"https://hostname:8000/app/search/search?q=%7Cloadjob%20scheduler__td26605__search__RMD5e461b39d4ff19795_at_1731522600_38116%20%7C%20head%204%20%7C%20tail%201&earliest=0&latest=now\",\"SuppressClosed\":\"True\",\"TeamsDescription\":\"True\"},\"entity\":{\"alertId\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"id\":\"913a3db5-7e2a-44f4-a4ff-3002af480c8d-1731522737697\",\"type\":\"alert\",\"message\":\"[Splunk] Load Balancer Member Status\",\"tags\":[],\"tinyId\":\"14585\",\"entity\":\"\",\"alias\":\"FL-NS-VPX-INT-1|mobileapist?vnetapis003?8777,\",\"createdAt\":1731522737697,\"updatedAt\":1731523038582000000,\"username\":\"System\",\"responders\":[{\"id\":\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\",\"type\":\"team\",\"name\":\"Monitoring_Admin\"}],\"teams\":[\"f8c9079d-c7bb-4e58-ac83-359cb217a3b5\"],\"actions\":[],\"priority\":\"P3\",\"oldPriority\":\"P3\",\"source\":\"Splunk\"}} [36mmessageId[0m=7546739e-2bab-414d-94b5-b0f205208932" ``` data emulation above ```  
@ITWhisperer, I want to make one table where we have the date in one column and the counts in another column.
Sparklines are numeric, so they will only show numbers. You could use a drilldown to open a panel that shows the errors, but as for a tooltip-type hover, you'd probably have to implement that yourself in JavaScript.
Is it possible to display dynamic values when hovering over a sparkline in a Splunk dashboard? In my case, the sparkline shows the success count. So, in case of failure, is it possible to display all the hourly error messages while hovering over the graph?
Your earliest= statement is wrong; it should be earliest=-1y@y. You have an extra @ sign (-1@y@y).
If you want to use different policies for different correlation searches, you should add some filtering criteria to your second Notable Event Aggregation Policy (NEAP). For example, you can use search_name (or source) matches correlation_search2 in the "include the events if" section of the Filtering Criteria and Instructions tab of your second NEAP.
Hi everyone, I'm working with Splunk IT Service Intelligence (ITSI) and want to automate the creation of maintenance windows using a scheduled search in SPL. Ideally, I'd like to use the rest command within SPL to define a maintenance window, assign specific entities and services to it, and have it run on a schedule. Is it possible to set up maintenance windows with entities and services directly from SPL? If anyone has sample SPL code or guidance on setting up automated maintenance windows, it would be very helpful! Thanks in advance!
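One way to approach this outside SPL is a scripted call to ITSI's maintenance-window REST interface. The sketch below only builds the JSON payload; the endpoint path, field names, and all keys shown are assumptions based on the ITSI maintenance_services_interface REST API and must be verified against the documentation for your ITSI version:

```python
import json
import time

# Assumed endpoint (verify against your ITSI version's REST API reference):
ENDPOINT = "/servicesNS/nobody/SA-ITOA/maintenance_services_interface/maintenance_calendar"

def build_maintenance_window(title, object_ids, duration_hours, start_time=None):
    """Build a JSON payload for one maintenance window (sketch).

    object_ids: list of (itsi_key, object_type) tuples, where object_type
    is "service" or "entity". Field names here are assumptions.
    """
    start = int(time.time()) if start_time is None else start_time
    return {
        "title": title,
        "start_time": start,
        "end_time": start + duration_hours * 3600,
        "objects": [{"_key": key, "object_type": otype} for key, otype in object_ids],
    }

payload = build_maintenance_window(
    "Patch window",                                    # hypothetical title
    [("svc-123", "service"), ("ent-456", "entity")],   # hypothetical keys
    duration_hours=2,
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to ENDPOINT on the management port with a token or session key; from SPL, the same idea could in principle be driven via | rest or a custom command, but the SPL rest command is read-only on many endpoints, so a scripted alert action is often the more reliable route.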
I am on Splunk 8.2.12. I am trying to get a distinct count of incidents that have happened in each month, year to date, and I'd like to compare that to the year prior. I feel like this should be pretty easy, but my results aren't showing the current year in comparison to the previous year. This shows the current year's data (2024):

(earliest=-1@y@y AND latest=now())
| eval date_month=strftime(_time, "%mon")
| eval date_year = strftime(_time, "%Y")
| timechart span=1mon dc(RMI_MastIncNumb) as "# of Incidents"

When I add | timewrap 1year series=exact time_format=%Y it ends up just showing me 2023.
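For clarity, the shape that timechart dc() followed by timewrap 1year is meant to produce can be sketched in plain Python: a distinct count of incident IDs per (year, month), pivoted so the years sit side by side per month. The events here are made-up illustrations, not real data:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical events: (timestamp, incident_id).
events = [
    (datetime(2023, 1, 5),  "INC1"),
    (datetime(2023, 1, 9),  "INC1"),   # duplicate ID: distinct-counted once
    (datetime(2023, 2, 1),  "INC2"),
    (datetime(2024, 1, 12), "INC3"),
    (datetime(2024, 2, 3),  "INC4"),
    (datetime(2024, 2, 7),  "INC5"),
]

# Distinct count per (year, month) -- the timechart dc() step.
distinct = defaultdict(set)
for ts, inc in events:
    distinct[(ts.year, ts.month)].add(inc)

# Pivot years into columns per month -- the timewrap step.
table = defaultdict(dict)              # month -> {year: distinct count}
for (year, month), ids in distinct.items():
    table[month][year] = len(ids)

for month in sorted(table):
    print(month, dict(sorted(table[month].items())))
```

If the Splunk output shows only one year, the usual culprit is the earliest= time modifier not actually covering both years, which matches the -1@y@y typo pointed out in the reply above.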
As @MuS said, you must ask your account team to add rights for you to download it after you have bought it.
Hi, shortly: nope. There are quite a few answers where this has already been discussed. The main point here is that a bucket is managed by the youngest event inside it. As there are several buckets in which the _time values of events can differ heavily from each other, you cannot get an exactly 1-month time period in hot+warm+cold. It's always defined by a combination of several parameters. You can find those in older answers or the docs. r. Ismo
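The "bucket is managed by its youngest event" point can be illustrated with a deliberately simplified model: a whole bucket only rolls to frozen once its newest event exceeds the retention period (frozenTimePeriodInSecs), so older events in the same bucket outlive a nominal retention window. All numbers below are illustrative, not real Splunk defaults:

```python
# Simplified model of Splunk bucket aging: the entire bucket is frozen
# only when its *newest* event is older than the retention period, so
# events far older than the retention window can still be searchable.
FROZEN_SECS = 30 * 24 * 3600  # hypothetical ~1-month retention

def bucket_is_frozen(newest_event_time, now, frozen_secs=FROZEN_SECS):
    """Return True once the bucket's newest event exceeds retention."""
    return (now - newest_event_time) > frozen_secs

now = 100 * 24 * 3600  # arbitrary "current" epoch for illustration
# A bucket whose newest event is from day 95 is only 5 days old, so
# everything in it -- even day-10 events -- survives the 30-day retention.
print(bucket_is_frozen(95 * 24 * 3600, now))  # False: bucket is kept
```

This is why hot+warm+cold can never be trimmed to an exact calendar month: retention acts on whole buckets, not on individual events.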
Hi, if your company is a Splunk Partner and fulfills some defined requirements, then there is a possibility to get a Splunk Cloud Sandbox environment for 12 months. I cannot recall those requirements now, but you or your company's partner manager can check them and, if they are fulfilled, order that sandbox for your use. r. Ismo