All Posts

Combine the Success and Error values under a single field, say "results" or "action", and then you should be able to get the two values in a pie chart.
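A sketch of that approach in SPL (the index, sourcetype, and status field names here are hypothetical placeholders for your own data):

```
index=my_index sourcetype=my_services
| eval result=if(status="Error", "Error", "Success")
| stats count by result
```

A pie chart over result then shows exactly two slices, one per outcome.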
I'm sure I am missing a fundamental point.
Hello, we have PDF mail delivery scheduled every evening; however, sendemail may fail (for instance, a mail server error with no error in Splunk's search.log). How can we reschedule PDF delivery for yesterday's data WITHOUT modifying the user's dashboard? Thanks.
So, I assume you have two Splunk environments: one running Splunk 6.6 and another running Splunk 8.2. Is that correct?
Hi @1ueshkil ... we may need more details from you. Please check this: https://www.splunk.com/en_us/blog/security/using-mitre-att-ck-in-splunk-security-essentials.html Do you know which use case exactly you are looking for on Linux and Palo Alto?
Did you enter the search in Studio's visual editor or did you insert it directly into the source? Is there some mistyping/miscopying? There is nothing wrong with rename. You can try out this test dashboard:

{
  "dataSources": {
    "ds_FAnSoMB1": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults\n| eval _raw = \"{\\\"data\\\":[{\\\"name\\\":\\\"B\\\"},{\\\"name\\\":\\\"D\\\"},{\\\"name\\\":\\\"b\\\"},{\\\"name\\\":\\\"d\\\"}]}\"\n| spath\n| fields - _*\n| rename data{}.name as name",
        "queryParameters": {
          "earliest": "-24h@h",
          "latest": "now"
        }
      },
      "name": "Table search"
    }
  },
  "visualizations": {
    "viz_qVGDM9DA": {
      "type": "splunk.table",
      "options": {
        "count": 100,
        "dataOverlayMode": "none",
        "drilldown": "none",
        "showRowNumbers": false,
        "showInternalFields": false
      },
      "dataSources": {
        "primary": "ds_FAnSoMB1"
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      },
      "title": "Global Time Range"
    }
  },
  "layout": {
    "type": "grid",
    "options": {
      "width": 1440,
      "height": 960
    },
    "structure": [
      {
        "item": "viz_qVGDM9DA",
        "type": "block",
        "position": { "x": 0, "y": 0, "w": 1440, "h": 250 }
      }
    ],
    "globalInputs": [ "input_global_trp" ]
  },
  "title": "DS dashboard and rename command",
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "description": "https://community.splunk.com/t5/Splunk-Search/Rename-works-in-search-but-not-in-Dashboard-Studio/m-p/671192#M230030"
}

The search used is simply

| makeresults
| eval _raw = "{\"data\":[{\"name\":\"B\"},{\"name\":\"D\"},{\"name\":\"b\"},{\"name\":\"d\"}]}"
| spath
| fields - _*
| rename data{}.name as name

The dashboard gives the exact same output.
Trying to get a script working:

import urllib.request, urllib.error, json

try:
    with urllib.request.urlopen("XXXXXXXXX.json") as url:
        data = json.loads(url.read().decode())
        print("data received")
        for item in data:
            print(item)
except urllib.error.URLError as e:
    print(f"Request failed with error: {e}")

This works fine and fetches the data, but I need it to go through a proxy server, and when I try that it does not work. Any help is appreciated.
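One way to route urllib through a proxy is to install a ProxyHandler explicitly. A minimal sketch, assuming an HTTP proxy (the proxy address and URLs below are made-up placeholders):

```python
import json
import urllib.error
import urllib.request


def fetch_json(url, proxy_url=None):
    """Fetch and decode a JSON document, optionally through an HTTP proxy."""
    if proxy_url:
        # Route both http and https requests through the given proxy.
        handlers = [urllib.request.ProxyHandler({"http": proxy_url,
                                                 "https": proxy_url})]
    else:
        handlers = []
    opener = urllib.request.build_opener(*handlers)
    try:
        with opener.open(url) as resp:
            return json.loads(resp.read().decode())
    except urllib.error.URLError as e:
        print(f"Request failed with error: {e}")
        return None


# Example call (placeholder URLs):
# data = fetch_json("https://example.com/XXXXXXXXX.json",
#                   proxy_url="http://proxy.example.com:3128")
```

Note that urllib also honors the http_proxy / https_proxy environment variables by default; if the proxy requires authentication, the http://user:pass@host:port URL form usually works with ProxyHandler.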
Please see below.

[root@prdpl2splunk02 bin]# ./splunk btool outputs list

[rfs]
batchSizeThresholdKB = 131072
batchTimeout = 30
compression = zstd
compressionLevel = 3
dropEventsOnUploadError = false
format = json
format.json.index_time_fields = true
format.ndjson.index_time_fields = true
partitionBy = legacy

[syslog]
maxEventSize = 1024
priority = <13>
type = udp

[tcpout]
ackTimeoutOnShutdown = 30
autoLBFrequency = 30
autoLBVolume = 0
blockOnCloning = true
blockWarnThreshold = 100
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-SHA256
compressed = false
connectionTTL = 0
connectionTimeout = 20
defaultGroup = default-autolb-group
disabled = false
dropClonedEventsOnQueueFull = 5
dropEventsOnQueueFull = -1
ecdhCurves = prime256v1, secp384r1, secp521r1
enableOldS2SProtocol = false
forceTimebasedAutoLB = false
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_introspection|_telemetry)
forwardedindex.filter.disable = false
heartbeatFrequency = 30
indexAndForward = 0
maxConnectionsPerIndexer = 2
maxFailuresPerInterval = 2
maxQueueSize = 500KB
negotiateProtocolLevel = 0
readTimeout = 300
secsInFailureInterval = 1
sendCookedData = true
sslCommonNameToCheck = *.align.splunkcloud.com
sslQuietShutdown = false
sslVerifyServerCert = true
sslVersions = tls1.2
tcpSendBufSz = 0
useACK = false
useClientSSLCompression = true
writeTimeout = 300

[tcpout-server://inputs1.stack.splunkcloud.com:9997]

[tcpout-server://inputs15.stack.splunkcloud.com:9997]
autoLBFrequency = 120
sslCommonNameToCheck = *.stack.splunkcloud.com
sslVerifyServerCert = false
sslVerifyServerName = false
useClientSSLCompression = true

[tcpout-server://inputs2.stack.splunkcloud.com:9997]
[tcpout-server://inputs3.stack.splunkcloud.com:9997]
[tcpout-server://inputs4.stack.splunkcloud.com:9997]
[tcpout-server://inputs5.stack.splunkcloud.com:9997]
[tcpout-server://inputs6.stack.splunkcloud.com:9997]
[tcpout-server://inputs7.stack.splunkcloud.com:9997]
[tcpout-server://inputs8.stack.splunkcloud.com:9997]
[tcpout-server://inputs9.stack.splunkcloud.com:9997]

[tcpout:default-autolb-group]
disabled = false
server = 54.85.90.105:9997, inputs2.stack.splunkcloud.com:9997, inputs3.stack.splunkcloud.com:9997,...... inputs15.stack.splunkcloud.com:9997

[tcpout:scs]
compressed = true
disabled = 1
server = stack.forwarders.scs.splunk.com:9997

UF output:

[rfs]
batchSizeThresholdKB = 131072
batchTimeout = 30
compression = zstd
compressionLevel = 3
dropEventsOnUploadError = false
format = json
format.json.index_time_fields = true
format.ndjson.index_time_fields = true
partitionBy = legacy

[syslog]
maxEventSize = 1024
priority = <13>
type = udp

[tcpout]
ackTimeoutOnShutdown = 30
autoLBFrequency = 30
autoLBVolume = 0
blockOnCloning = true
blockWarnThreshold = 100
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-SHA256
compressed = false
connectionTTL = 0
connectionTimeout = 20
defaultGroup = default-autolb-group
disabled = false
dropClonedEventsOnQueueFull = 5
dropEventsOnQueueFull = -1
ecdhCurves = prime256v1, secp384r1, secp521r1
enableOldS2SProtocol = false
forceTimebasedAutoLB = false
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_introspection|_internal|_telemetry|_configtracker)
forwardedindex.filter.disable = false
heartbeatFrequency = 30
indexAndForward = false
maxConnectionsPerIndexer = 2
maxFailuresPerInterval = 2
maxQueueSize = auto
readTimeout = 300
secsInFailureInterval = 1
sendCookedData = true
sslQuietShutdown = false
sslVersions = tls1.2
tcpSendBufSz = 0
useACK = false
useClientSSLCompression = true
writeTimeout = 300

[tcpout-server://prdpl2splunk02.domainame.com:9997]

[tcpout:default-autolb-group]
server = prdpl2splunk02.domainame.com:9997
Could you help me make only two slices, just successes and failures? I will not be needing the services column; the search only works on one service, and the name was only provided for the bar graph. We won't need the service name; it is not useful. I just need success and failure in a pie chart.
Hi @madhav_dholakia, yes, you can refresh the dashboard panel every 10 seconds, but does your data really change every 10 seconds, and must all the data variations be displayed every 10 seconds? Also, does your search return results in less than 10 seconds? I suggest re-analyzing your needs to define a more realistic requirement to implement. Otherwise, continue to use real-time searches and give much more resources (CPUs on indexers and search heads) to your infrastructure: remember that a search takes a CPU on the SH and on the IDX and releases it when it finishes (in your case, never), so if you have 20 users using the dashboard you have to add at least 20 CPUs to the SH and to the IDXs. Ciao. Giuseppe
Hi, thanks again for your attention. I have reproduced this in my lab (using _internal) and have noticed differences in how the searches are chained, despite the fact that clicking the link effectively concatenates and displays them. I'll structure the findings in what I think is a more coherent way and post the results later today.
Do you mean to say that these two together, index=my_index | timechart span=30m count(eventClass) by severity, return results, but when they are respectively the main search and the chained search, nothing is shown? Posting actual output will not help in this case. Does chart 1 use the same main search (without a chained search)?
When I developed an add-on based on a Splunk cluster, I encountered some problems:

1. I created an index named test in the Splunk indexer cluster, and linked the search head cluster nodes with the indexer cluster via "./splunk edit cluster-config -mode searchhead -master_uri <Indexer Cluster Master URI>". I want to write data to this test index through the Splunk API and read the written data from the other search head nodes, but I found that it is not working. Is this related to my previously creating the index on the search head node? If so, how can I remove the index from the search head cluster?

2. Is KV Store data synchronized in the search head cluster? What should I do if I want to clean up the environment and delete a KV Store in the search head cluster?

3. What is the data communication mechanism between search head cluster members? I want to achieve some data synchronization for my add-on between multiple search heads; is there any good method?

BR!
Yes, but then it won't be near real-time, which is required. Is there any other option to recreate this dashboard so that we can get the data refreshed every 10 (or 15) seconds? Thank you.
Hi @madhav_dholakia, you cannot schedule a cron job every 10 seconds, but you can schedule one every 10 minutes using something like this: */10 * * * * Ciao. Giuseppe
Hi @gcusello - I am not sure if I can schedule a Splunk report to run every 10 seconds. I added this cron expression in the Report Schedule, but it says "Invalid Cron": 0/10 0 0 ? * * *
Hi, I have data like these entries:

link     id     parent     name
----     --     ------     ---------
link1    311               email.eml
link1    312    311        abc.rar
link2    315    312        xyz.exe

that I want to combine into this:

link            id               parent      name
----            --               ------      ---------
link1, link2    315, 312, 311    312, 311    xyz.exe, abc.rar, email.eml

The combining condition is based on id and parent: 311 is the parent, 312 is a child of 311, and 315 is a child of 312 (a 'grandchild' of 311). Thank you in advance for your help!
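The combining logic can be sketched outside SPL. A hypothetical Python sketch that follows the parent pointers from each leaf up to its root and joins the fields (rows taken from the example above; the output order is leaf-to-root, so reverse the chain if root-first order is preferred):

```python
# Rows from the example; None marks a missing parent (the root of a chain).
rows = [
    {"link": "link1", "id": "311", "parent": None,  "name": "email.eml"},
    {"link": "link1", "id": "312", "parent": "311", "name": "abc.rar"},
    {"link": "link2", "id": "315", "parent": "312", "name": "xyz.exe"},
]

by_id = {r["id"]: r for r in rows}

# Leaves are ids that appear as nobody's parent; walk each leaf up to its root.
parents = {r["parent"] for r in rows if r["parent"]}
chains = []
for r in rows:
    if r["id"] not in parents:          # this row is the leaf of a chain
        chain = []
        node = r
        while node:
            chain.append(node)
            node = by_id.get(node["parent"])
        chains.append(chain)

# Join each field across the chain, deduplicating links in encounter order.
for chain in chains:
    combined = {
        "link":   ", ".join(dict.fromkeys(n["link"] for n in chain)),
        "id":     ", ".join(n["id"] for n in chain),
        "parent": ", ".join(n["parent"] for n in chain if n["parent"]),
        "name":   ", ".join(n["name"] for n in chain),
    }
    print(combined)
    # prints: {'link': 'link2, link1', 'id': '315, 312, 311',
    #          'parent': '312, 311', 'name': 'xyz.exe, abc.rar, email.eml'}
```

The same leaf-to-root walk could be expressed in SPL with self-joins or map, but the pointer-chasing idea is easier to see in plain code first.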
Thanks @gcusello - I will give it a try and will update the result/query here. Thank you.
Hi, I'm upgrading and migrating my Splunk Enterprise 8.1.1 instance running on Windows Server 2012 R2. Does anyone have a recommended path for this? Upgrade first, or migrate first? Usually I would prefer to upgrade first, but I see that 8.2 is not supported on Windows Server 2012 R2.
Hi @madhav_dholakia, if you have a real-time dashboard continuously used by many users, you will kill your system. In this case, use a different approach: create a report containing the information to display, and then in the dashboard display the report using loadjob (https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/SearchReference/Loadjob). This is an old post but the solution is still valid: https://community.splunk.com/t5/Dashboards-Visualizations/What-can-we-use-to-replace-loadjob-based-dashboards-that-work/td-p/183897 See also: https://community.splunk.com/t5/Dashboards-Visualizations/Add-reports-to-dashboards/m-p/9392 Ciao. Giuseppe
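A minimal sketch of the loadjob approach in SPL (the owner, app, and report names are placeholders for your own scheduled report):

```
| loadjob savedsearch="owner_name:app_name:my_report_name"
```

The report runs on its own schedule, and the dashboard panel just loads the latest cached results, so many concurrent viewers cost only one scheduled search instead of one search per viewer.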