All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Again - you're talking about two completely different things. One is general IP-based restrictions - you can do this on a reverse proxy or even directly on the Splunk server itself using access rules for ports. The other is restricting given roles or users to specific IPs. Again, this could also be done if the proxy was acting as an SSO source for Splunk, but that is as tricky as any other SSO, and you could still easily "escape" this IP restriction after the initial login.
Hi @Ombessam  If you click on your input, then in the panel on the right click the Display dropdown and select "In Canvas", you can then move it around inside a single tab. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hi @ITWhisperer  I'm using Dashboard Studio. Here is my source code:

{
  "title": "ButterCup Game",
  "description": "",
  "inputs": {
    "input_global_trp": {
      "options": {
        "defaultValue": "0,",
        "token": "global_time"
      },
      "title": "Global Time Range",
      "type": "input.timerange"
    },
    "input_wAcCA79n": {
      "options": {
        "defaultValue": "*",
        "items": [
          { "label": "All", "value": "*" },
          { "label": "ACCESSORIES", "value": "ACCESSORIES" },
          { "label": "ARCADE", "value": "ARCADE" },
          { "label": "SHOOTER", "value": "SHOOTER" },
          { "label": "SIMULATION", "value": "SIMULATION" },
          { "label": "SPORTS", "value": "SPORTS" },
          { "label": "STRATEGY", "value": "STRATEGY" },
          { "label": "TEE", "value": "TEE" }
        ],
        "token": "dd_token"
      },
      "title": "Game Categories",
      "type": "input.dropdown"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "earliest": "$global_time.earliest$",
            "latest": "$global_time.latest$"
          }
        }
      }
    }
  },
  "visualizations": {
    "viz_t3jLGiwh": {
      "dataSources": { "primary": "ds_V5vxqc3K" },
      "title": "Sales count by product",
      "type": "splunk.pie"
    }
  },
  "dataSources": {
    "ds_V5vxqc3K": {
      "name": "CategorySales",
      "options": {
        "query": "index=web sourcetype=\"access_combined\" status=200 product_name=* categoryId=$dd_token$\n| stats count by product_name",
        "queryParameters": {
          "earliest": "$global_time.earliest$",
          "latest": "$global_time.latest$"
        }
      },
      "type": "ds.search"
    }
  },
  "layout": {
    "globalInputs": [ "input_global_trp", "input_wAcCA79n" ],
    "layoutDefinitions": {
      "layout_1": {
        "options": { "height": 960, "width": 1440 },
        "structure": [
          {
            "item": "viz_t3jLGiwh",
            "position": { "h": 400, "w": 1440, "x": 0, "y": 0 },
            "type": "block"
          }
        ],
        "type": "grid"
      },
      "layout_AfZAhYi7": { "structure": [], "type": "grid" }
    },
    "options": {},
    "tabs": {
      "items": [
        { "label": "Inventory", "layoutId": "layout_1" },
        { "label": "Sales", "layoutId": "layout_AfZAhYi7" }
      ]
    }
  }
}
Are you using Classic or Studio? Please share your source code for your dashboard (in a codeblock </>)
Hello guys, I have a dashboard with two tabs. I've added a dropdown input and I'm going to add more inputs, but I want to display an input only for a specific tab. In my case, for example, I want the dropdown input to be displayed only when the Inventory tab is active. The dropdown input should disappear when I click the Sales tab. Can anyone help me achieve this? Thanks a lot
Hi @ej87897  I have done this upgrade with a number of customers, so I'm not sure if it's a problem with 9.4.X itself; maybe a configuration somewhere is causing the issue. A few more things to check: if you run a search against your indexers for index=_ds*, do you get any results? If you run the same search from your deployment server, do you get any results? Please let me know how you get on and we can try and work through the issue, but in the meantime you may wish to open a support case via splunk.com/support to get the ball rolling from that side. Regards Will
@Andre_  No, Splunk has no access controls based on network source, only user-to-role mapping, so this is not doable in the Splunk server configuration. However, a common and effective way to restrict access to Splunk roles based on source IP is to place Splunk behind a reverse proxy (e.g., Apache or NGINX) and configure the proxy to handle IP-based restrictions. I haven't experimented with this approach myself yet.  Define roles on the Splunk platform with capabilities - Splunk Documentation  About configuring role-based user access - Splunk Documentation
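As a rough illustration of the reverse-proxy idea, an NGINX server block can limit which source networks reach Splunk Web at all. This is a sketch only: the hostname, backend address, and CIDR ranges are placeholder assumptions, and note it restricts the whole UI by source IP, not individual Splunk roles.

```nginx
# Hypothetical NGINX front end for Splunk Web. Hostname, backend
# address and CIDR ranges are assumptions - adjust for your setup.
server {
    listen 443 ssl;
    server_name splunk.example.com;

    location / {
        # Only allow clients from this internal range.
        allow 10.0.0.0/8;
        deny  all;

        proxy_pass http://splunk-web-backend:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Per-role restrictions would still require the proxy to act as an SSO source for Splunk, as discussed above.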
@ej87897 I recommend raising a support ticket to troubleshoot this issue.
Thanks Will
Hi @johnjohn  I know of 2 ways to achieve this, but there could be others.

1. Enable incoming e-mail support for a list or library on SharePoint - check out https://support.microsoft.com/en-gb/office/enable-incoming-e-mail-support-for-a-list-or-library-dcaf44a0-1d9b-451a-84c7-6c52e7db908e for more information. You would then configure a scheduled search with an email alert action to send the CSV results to the email address provided by SharePoint, and the results would be added to the library.

2. Use Microsoft Power Automate. As above, you would use a scheduled search to send the CSV results, then create a Flow triggered by email:
- Use the "When a new email arrives (V3)" trigger from the Office 365 Outlook connector (this requires an O365/Outlook.com email account).
- Add a condition to filter for emails with CSV attachments.
- Configure the "Create file" action: connect to your SharePoint site, select the destination library/folder, choose to save the attachment from the email, and set dynamic content for the file name (keep the original or create custom naming).

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
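For the scheduled-search side of either option, a savedsearches.conf stanza along these lines would email the results as a CSV attachment on a schedule. The stanza name, search, schedule, and recipient address here are placeholder assumptions, not a tested configuration:

```ini
# Hypothetical scheduled search that emails its results as a CSV.
# Stanza name, search, schedule and recipient are assumptions.
[export_inventory_to_sharepoint]
search = index=main sourcetype=inventory | stats count by host
enableSched = 1
cron_schedule = 0 6 * * *
action.email = 1
action.email.to = my-library@example.sharepoint.com
action.email.sendresults = 1
action.email.sendcsv = 1
action.email.inline = 0
```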
Hi @173022  Unfortunately the templating variables are only supported in email and HTTP request templates. Only the following static character types are allowed in a health rule name: uppercase letters A-Z, lowercase letters a-z, numbers 0-9, and the symbols ~ ` ! @ # $ % ^ * ( ) _ - + = { [ } ] | : ; ' . ? / > < You may want to look at using the AppDynamics REST API to programmatically create health rules with dynamic names based on your infrastructure if you have a number of rules to manage with custom names, although you could end up with a larger quantity of rules. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
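As a rough sketch of that REST approach, you could generate one rule per node with the host, node, and tier baked into the rule name, then POST each payload to the controller. Everything below (the name scheme, payload fields, and endpoint path) is an assumption to illustrate the idea, not a verified AppDynamics schema:

```python
# Sketch: build per-node health-rule payloads with host/node/tier in
# the name. Payload fields and endpoint are assumptions - check the
# AppDynamics Health Rule API docs for the real schema.

def health_rule_name(host: str, node: str, tier: str) -> str:
    # Health rule names only allow a restricted character set, so keep
    # to letters, digits and simple separators.
    return f"CPU-High_{tier}_{node}_{host}"

def build_rule(host: str, node: str, tier: str) -> dict:
    return {
        "name": health_rule_name(host, node, tier),
        "enabled": True,
        # ...condition/criteria fields would go here...
    }

nodes = [("web01", "node-1", "WebTier"), ("web02", "node-2", "WebTier")]
rules = [build_rule(*n) for n in nodes]

# Each payload would then be POSTed to something like:
#   POST /controller/alerting/rest/v1/applications/<appId>/health-rules
# using an authenticated HTTP client.
print([r["name"] for r in rules])
```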
As @PickleRick says, it should work on the data you gave us, but one assumption it makes is that the PHTRAN field in all your example rows always refers to the ROOT transaction QZ81 (for USER=GPDCFC26), even though your example shows 2 sub-levels of task, i.e. USER=GPDCFC26 calls APP7 with TRANNUM 70322, and APP7 is the parent for all its own subtasks through APP3 and APP5. If (and only if) this is the case, then it will probably handle all the levels you have, but as he also states, you can only program recursion to a fixed level, so if you only have the parent/child transaction numbers, then it's more difficult.
Not directly. You could probably do something like that with SAML if your identity provider could allow/deny login based on IP criteria. But be aware that even then it would only work during the initial login. If the user switched to another network while having a logged-in session, they would still be logged in with their role.
Ok. So you have only the edges of your tree, as I suspected. @bowesmana's solution will probably work (I haven't checked, but that's what I'd expect from experience ;-)), but be aware that with such data you're limited to a fixed level of "nesting". Since SPL cannot do recursion, you cannot "unpack" an arbitrary level of sub-sub-sub...tasks with a general solution. You can write a search for two levels, and you could extend it to three or four, but it will always be a fixed level.
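To illustrate the recursion point outside of SPL: with only (child, parent) edges, resolving a task to its root requires an arbitrarily deep walk up the tree, which a general-purpose language handles trivially but a fixed-depth SPL search cannot. A minimal sketch, using made-up transaction numbers loosely modelled on the sample data:

```python
# Sketch: walk parent links to find the root of each task - the kind
# of arbitrary-depth recursion SPL cannot express. Edge data is made up.

def find_root(tran: int, parent_of: dict) -> int:
    """Follow parent links until a transaction has no parent."""
    while tran in parent_of:
        tran = parent_of[tran]
    return tran

# child -> parent transaction numbers (arbitrary depth).
parent_of = {43853: 70322, 20634: 70322, 70322: 43836}

print(find_root(43853, parent_of))  # resolves through two levels to 43836
```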
Predefined Templating Variables I tried predefined templating variables - ${event.node.name} - and custom insertables in the health rule, but they didn't work as expected.
If I understand you correctly, this example should give you what you want. The first makeresults section is crafting your data, so you actually only need the part from the eval statement following the data setup.

| makeresults format=csv data="START,STOP,USER,JOBNAME,TRAN,TRANNUM,PHAPPLID,PHTRAN,PHTRANNO,USRCPUT_MICROSEC
2:10:30 p.m.,2:10:30 p.m., ,APP3,CSMI,43853,APP7,QZ81,70322,76
2:10:30 p.m.,2:10:30 p.m., ,APP3,CSMI,43850,APP7,QZ81,70322,64
2:10:30 p.m.,2:10:30 p.m., ,APP3,CSMI,43848,APP7,QZ81,70322,64
2:10:30 p.m.,2:10:30 p.m., ,APP3,CSMI,43846,APP7,QZ81,70322,74
2:10:30 p.m.,2:10:30 p.m., ,APP3,CSMI,43845,APP7,QZ81,70322,68
2:10:30 p.m.,2:10:30 p.m., ,APP3,CSMI,43844,APP7,QZ81,70322,71
2:10:30 p.m.,2:10:30 p.m., ,APP3,CSMI,43857,APP7,QZ81,70322,65
2:10:30 p.m.,2:10:30 p.m., ,APP3,CSMI,43856,APP7,QZ81,70322,72
2:10:30 p.m.,2:10:30 p.m., ,APP5,CSMI,20634,APP7,QZ81,70322,8860
2:10:30 p.m.,2:10:30 p.m., ,APP7,QZ81,70322,APP3,QZ81,43836,16043
2:10:30 p.m.,2:10:30 p.m.,GPDCFC26,APP3,QZ81,43836, , ,0,897
2:10:17 p.m.,2:10:17 p.m., ,APP3,CSMI,41839,APP5,QZ61,15551,51
2:10:17 p.m.,2:10:17 p.m., ,APP3,CSMI,41838,APP5,QZ61,15551,64
2:10:17 p.m.,2:10:17 p.m., ,APP3,CSMI,41837,APP5,QZ61,15551,79
2:10:17 p.m.,2:10:17 p.m., ,APP5,QZ61,15551,APP3,QZ61,41835,5232
2:10:17 p.m.,2:10:17 p.m.,GOTLIS12,APP3,QZ61,41835, , ,0,778"
``` In the task case, PHTRAN is empty, so this will copy the TRAN to PHTRAN giving you correlation ```
| eval PHTRAN=coalesce(PHTRAN,TRAN)
``` This counts all occurrences of the PHTRAN and joins the USER field into the child events ```
| eventstats count as subTasks values(USER) as USER by PHTRAN
``` Now count the executions of each USER and evaluate the timings ```
| stats count(eval(PHTRANNO=0)) as Executions sum(USRCPUT_MICROSEC) as tot_USRCPUT_MICROSEC avg(USRCPUT_MICROSEC) as avg_USRCPUT_MICROSEC max(subTasks) as subTasks by USER
``` Adjust the subtask count, as we treated the main task as a subtask, then calculate the average subtask count ```
| eval subTasks=subTasks-1, avg_subTasks=subTasks/Executions
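As a quick cross-check of the same grouping logic outside SPL, a sketch re-implementing the counting idea (not the timings) on a distilled version of the sample rows. The row tuples below are my own simplification of the data, keeping only the fields the grouping needs:

```python
# Sketch: coalesce PHTRAN with TRAN, group by it, propagate the USER
# from the root row, and subtract the root task from each group's size.
from collections import defaultdict

# (USER, TRAN, PHTRAN, PHTRANNO) distilled from the sample rows.
rows = (
    [("", "CSMI", "QZ81", 70322)] * 9      # 8x APP3 + 1x APP5 children
    + [("", "QZ81", "QZ81", 43836)]        # APP7 mid-level task
    + [("GPDCFC26", "QZ81", "", 0)]        # root task
    + [("", "CSMI", "QZ61", 15551)] * 3    # APP3 children
    + [("", "QZ61", "QZ61", 41835)]        # APP5 mid-level task
    + [("GOTLIS12", "QZ61", "", 0)]        # root task
)

group_size = defaultdict(int)
group_user = {}
executions = defaultdict(int)

for user, tran, phtran, phtranno in rows:
    key = phtran or tran       # eval PHTRAN=coalesce(PHTRAN,TRAN)
    group_size[key] += 1       # eventstats count as subTasks by PHTRAN
    if user:
        group_user[key] = user  # eventstats values(USER)
    if phtranno == 0:
        executions[user] += 1   # stats count(eval(PHTRANNO=0))

# subTasks = group size minus the root task itself
sub_tasks = {group_user[k]: n - 1 for k, n in group_size.items()}
print(sub_tasks, dict(executions))
```

Like the SPL, this keys purely on the (coalesced) PHTRAN value, so it would mis-group data where two different root tasks share a transaction name.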
Hi Experts, Is there any way I can add "Hostname, Node Name, Tier name" to health rule names? I tested with some placeholders but they didn't work. Appreciate your suggestions. Eg:  Thanks, Raj AppDynamics
I added a fake field with a fake value in the query; it was added by selecting a drop-down option as an additional step.
Can't thank you enough! The support ticket was on my to-do list all day and kept getting back-burnered. Appreciate the information! Looking forward to rm'ing it in the morning.
Hi everybody, thanks for the replies. Sorry I didn't provide enough detail in the original post. I'm trying to work out the data structure as we have lots of data; I've pulled some data out that shows 2 examples. The first example starts with USER=GPDCFC26; all the rows above it are sub-tasks or sub-sub-tasks. The second example is USER=GOTLIS12, and the 4 rows above it are again the sub and sub-sub tasks. What I want, by user (which is at the task level only), is a couple of bits of information: the average and max total CPU time (USRCPUT_MICROSEC), the average number of sub-tasks for each task (so 10 for GPDCFC26 and 4 for GOTLIS12), and a count of the number of times each task has executed (1 for both in this case).

START         STOP          USER      JOBNAME  TRAN  TRANNUM  PHAPPLID  PHTRAN  PHTRANNO  USRCPUT_MICROSEC
2:10:30 p.m.  2:10:30 p.m.            APP3     CSMI  43853    APP7      QZ81    70322     76
2:10:30 p.m.  2:10:30 p.m.            APP3     CSMI  43850    APP7      QZ81    70322     64
2:10:30 p.m.  2:10:30 p.m.            APP3     CSMI  43848    APP7      QZ81    70322     64
2:10:30 p.m.  2:10:30 p.m.            APP3     CSMI  43846    APP7      QZ81    70322     74
2:10:30 p.m.  2:10:30 p.m.            APP3     CSMI  43845    APP7      QZ81    70322     68
2:10:30 p.m.  2:10:30 p.m.            APP3     CSMI  43844    APP7      QZ81    70322     71
2:10:30 p.m.  2:10:30 p.m.            APP3     CSMI  43857    APP7      QZ81    70322     65
2:10:30 p.m.  2:10:30 p.m.            APP3     CSMI  43856    APP7      QZ81    70322     72
2:10:30 p.m.  2:10:30 p.m.            APP5     CSMI  20634    APP7      QZ81    70322     8860
2:10:30 p.m.  2:10:30 p.m.            APP7     QZ81  70322    APP3      QZ81    43836     16043
2:10:30 p.m.  2:10:30 p.m.  GPDCFC26  APP3     QZ81  43836                      0         897
2:10:17 p.m.  2:10:17 p.m.            APP3     CSMI  41839    APP5      QZ61    15551     51
2:10:17 p.m.  2:10:17 p.m.            APP3     CSMI  41838    APP5      QZ61    15551     64
2:10:17 p.m.  2:10:17 p.m.            APP3     CSMI  41837    APP5      QZ61    15551     79
2:10:17 p.m.  2:10:17 p.m.            APP5     QZ61  15551    APP3      QZ61    41835     5232
2:10:17 p.m.  2:10:17 p.m.  GOTLIS12  APP3     QZ61  41835                      0         778