All Posts

I normally use requests when handling HTTP or REST calls with Python, as I find it really useful. Requests has methods for deserialising JSON as well as options to configure proxies, so you might find this helpful: https://stackoverflow.com/questions/8287628/proxies-with-python-requests-module

You can then handle your own exceptions based on the HTTP response. The useless facts API is handy for testing if you want to avoid prod. If you need to handle authentication on the proxy, that may be more complex. E.g.

import requests

def get_content(base, route, proxies):
    url = f'{base}/{route}'
    headers = {'Content-Type': 'application/json'}
    response = requests.get(url, headers=headers, proxies=proxies)
    return response.json()

def main():
    base = "https://uselessfacts.jsph.pl"
    route = "api/v2/facts/random"
    http_proxy = "http://10.10.1.10:3128"
    https_proxy = "https://10.10.1.11:1080"
    ftp_proxy = "ftp://10.10.1.10:3128"
    proxies = {
        "http": http_proxy,
        "https": https_proxy,
        "ftp": ftp_proxy
    }
    response = get_content(base, route, proxies)
    print(response)

if __name__ == "__main__":
    main()
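As a rough sketch of the authenticated-proxy case mentioned above (the proxy host, port, and credentials here are placeholders for illustration, not values from the original post), requests accepts basic-auth credentials embedded directly in the proxy URL:

import requests

# Hypothetical proxy credentials and address -- replace with your own values.
PROXY_USER = "proxyuser"
PROXY_PASS = "proxypass"
PROXY_HOST = "10.10.1.10:3128"

# Basic-auth credentials can be embedded in the proxy URL itself.
proxies = {
    "http": f"http://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}",
    "https": f"http://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}",
}

response = requests.get(
    "https://uselessfacts.jsph.pl/api/v2/facts/random",
    headers={"Content-Type": "application/json"},
    proxies=proxies,
    timeout=10,
)
response.raise_for_status()  # raise an exception for 4xx/5xx responses
print(response.json())

Requests will also pick up HTTP_PROXY/HTTPS_PROXY environment variables if you prefer not to pass a proxies argument.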
The screenshot alone will not be sufficient.  As this is constructed with _internal, can you post the source code?
@bowesmana I have got it to work with MV, but the colouring element has to be at the beginning and hidden, which means every field has to become an MV with some sort of colour indicator prepended. It is a little messy, and until it is clear what is actually going on with the search, I didn't want to spend too much time on it.
Only addtotals appends the total to the end of each row.  Have you tried addcoltotals as @bowesmana recommended?  If you did, you would have gotten this:

Now, floating-point arithmetic is always tricky with digital computers, but artifacts usually shouldn't show up in such small jobs.  Then again, Splunk is known to be quirky in many computations. (addcoltotals does not have this problem.)

You wanted the total (1129.36) to be rounded up to 1130.  This can be done after addcoltotals.  For example,

| makeresults format=csv data="Student, Score
a,153.8
b,154.8
c,131.7
d,115.4
e,103.2
f,95.4
g,95.4
h,93.2
i,93.2
j,93.26"
| table Student, Score
| addcoltotals
| eval Score = if(isnull(Student), floor(Score) + 1, Score)
Hello, unfortunately, this solution doesn't solve anything.
Club the Success and Error values under a single field, say "results" or "action", and then you should be able to get the 2 values in a pie chart.
I'm sure I am missing a fundamental point.
Hello, we have PDF mail delivery scheduled every evening; however, sendemail may fail (for instance, a mail server error with no error in Splunk search.log). How can we reschedule PDF delivery for yesterday's data WITHOUT modifying the user's dashboard? Thanks.
So, I assume you have two Splunk environments: one running Splunk 6.6 and another running Splunk 8.2. Is that correct?
Hi @1ueshkil... we may need more details from you. Please check this: https://www.splunk.com/en_us/blog/security/using-mitre-att-ck-in-splunk-security-essentials.html For Linux and Palo Alto, do you know which use case exactly you are looking for?
Did you enter the search in Studio's visual editor or did you insert it directly into the source?  Is there some mistyping/miscopying? There is nothing wrong with rename.  You can try out this test dashboard

{
  "dataSources": {
    "ds_FAnSoMB1": {
      "type": "ds.search",
      "options": {
        "query": "| makeresults\n| eval _raw = \"{\\\"data\\\":[{\\\"name\\\":\\\"B\\\"},{\\\"name\\\":\\\"D\\\"},{\\\"name\\\":\\\"b\\\"},{\\\"name\\\":\\\"d\\\"}]}\"\n| spath\n| fields - _*\n| rename data{}.name as name",
        "queryParameters": {
          "earliest": "-24h@h",
          "latest": "now"
        }
      },
      "name": "Table search"
    }
  },
  "visualizations": {
    "viz_qVGDM9DA": {
      "type": "splunk.table",
      "options": {
        "count": 100,
        "dataOverlayMode": "none",
        "drilldown": "none",
        "showRowNumbers": false,
        "showInternalFields": false
      },
      "dataSources": {
        "primary": "ds_FAnSoMB1"
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      },
      "title": "Global Time Range"
    }
  },
  "layout": {
    "type": "grid",
    "options": {
      "width": 1440,
      "height": 960
    },
    "structure": [
      {
        "item": "viz_qVGDM9DA",
        "type": "block",
        "position": { "x": 0, "y": 0, "w": 1440, "h": 250 }
      }
    ],
    "globalInputs": [ "input_global_trp" ]
  },
  "title": "DS dashboard and rename command",
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "description": "https://community.splunk.com/t5/Splunk-Search/Rename-works-in-search-but-not-in-Dashboard-Studio/m-p/671192#M230030"
}

The search used is simply

| makeresults
| eval _raw = "{\"data\":[{\"name\":\"B\"},{\"name\":\"D\"},{\"name\":\"b\"},{\"name\":\"d\"}]}"
| spath
| fields - _*
| rename data{}.name as name

The dashboard gives the exact same output.
Trying to get a script working:

import urllib.request, urllib.error, json

try:
    with urllib.request.urlopen("XXXXXXXXX.json") as url:
        data = json.loads(url.read().decode())
        print("data received")
        for item in data:
            print(item)
except urllib.error.URLError as e:
    print(f"Request failed with error: {e}")

This works fine and fetches the data, but I need it to pass through a proxy server, and when I try that it does not work. Any help is appreciated.
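One approach worth trying is a minimal sketch assuming an unauthenticated HTTP/HTTPS proxy; the proxy address below is a placeholder, not a value from the original post. It installs a ProxyHandler so that subsequent urlopen() calls go through the proxy:

import json
import urllib.error
import urllib.request

# Hypothetical proxy address -- replace with your real proxy.
proxies = {
    "http": "http://10.10.1.10:3128",
    "https": "http://10.10.1.10:3128",
}

# Build an opener that routes requests through the proxy and install it globally.
opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxies))
urllib.request.install_opener(opener)

try:
    with urllib.request.urlopen("XXXXXXXXX.json") as url:
        data = json.loads(url.read().decode())
        print("data received")
        for item in data:
            print(item)
except urllib.error.URLError as e:
    print(f"Request failed with error: {e}")

Alternatively, setting the HTTP_PROXY/HTTPS_PROXY environment variables before running the script is often enough, since urllib picks those up by default.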
Please see below.

[root@prdpl2splunk02 bin]# ./splunk btool outputs list

[rfs]
batchSizeThresholdKB = 131072
batchTimeout = 30
compression = zstd
compressionLevel = 3
dropEventsOnUploadError = false
format = json
format.json.index_time_fields = true
format.ndjson.index_time_fields = true
partitionBy = legacy

[syslog]
maxEventSize = 1024
priority = <13>
type = udp

[tcpout]
ackTimeoutOnShutdown = 30
autoLBFrequency = 30
autoLBVolume = 0
blockOnCloning = true
blockWarnThreshold = 100
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-SHA256
compressed = false
connectionTTL = 0
connectionTimeout = 20
defaultGroup = default-autolb-group
disabled = false
dropClonedEventsOnQueueFull = 5
dropEventsOnQueueFull = -1
ecdhCurves = prime256v1, secp384r1, secp521r1
enableOldS2SProtocol = false
forceTimebasedAutoLB = false
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_introspection|_telemetry)
forwardedindex.filter.disable = false
heartbeatFrequency = 30
indexAndForward = 0
maxConnectionsPerIndexer = 2
maxFailuresPerInterval = 2
maxQueueSize = 500KB
negotiateProtocolLevel = 0
readTimeout = 300
secsInFailureInterval = 1
sendCookedData = true
sslCommonNameToCheck = *.align.splunkcloud.com
sslQuietShutdown = false
sslVerifyServerCert = true
sslVersions = tls1.2
tcpSendBufSz = 0
useACK = false
useClientSSLCompression = true
writeTimeout = 300

[tcpout-server://inputs1.stack.splunkcloud.com:9997]

[tcpout-server://inputs15.stack.splunkcloud.com:9997]
autoLBFrequency = 120
sslCommonNameToCheck = *.stack.splunkcloud.com
sslVerifyServerCert = false
sslVerifyServerName = false
useClientSSLCompression = true

[tcpout-server://inputs2.stack.splunkcloud.com:9997]
[tcpout-server://inputs3.stack.splunkcloud.com:9997]
[tcpout-server://inputs4.stack.splunkcloud.com:9997]
[tcpout-server://inputs5.stack.splunkcloud.com:9997]
[tcpout-server://inputs6.stack.splunkcloud.com:9997]
[tcpout-server://inputs7.stack.splunkcloud.com:9997]
[tcpout-server://inputs8.stack.splunkcloud.com:9997]
[tcpout-server://inputs9.stack.splunkcloud.com:9997]

[tcpout:default-autolb-group]
disabled = false
server = 54.85.90.105:9997, inputs2.stack.splunkcloud.com:9997, inputs3.stack.splunkcloud.com:9997, ...... inputs15.stack.splunkcloud.com:9997

[tcpout:scs]
compressed = true
disabled = 1
server = stack.forwarders.scs.splunk.com:9997

UF Output:

[rfs]
batchSizeThresholdKB = 131072
batchTimeout = 30
compression = zstd
compressionLevel = 3
dropEventsOnUploadError = false
format = json
format.json.index_time_fields = true
format.ndjson.index_time_fields = true
partitionBy = legacy

[syslog]
maxEventSize = 1024
priority = <13>
type = udp

[tcpout]
ackTimeoutOnShutdown = 30
autoLBFrequency = 30
autoLBVolume = 0
blockOnCloning = true
blockWarnThreshold = 100
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-SHA256
compressed = false
connectionTTL = 0
connectionTimeout = 20
defaultGroup = default-autolb-group
disabled = false
dropClonedEventsOnQueueFull = 5
dropEventsOnQueueFull = -1
ecdhCurves = prime256v1, secp384r1, secp521r1
enableOldS2SProtocol = false
forceTimebasedAutoLB = false
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_introspection|_internal|_telemetry|_configtracker)
forwardedindex.filter.disable = false
heartbeatFrequency = 30
indexAndForward = false
maxConnectionsPerIndexer = 2
maxFailuresPerInterval = 2
maxQueueSize = auto
readTimeout = 300
secsInFailureInterval = 1
sendCookedData = true
sslQuietShutdown = false
sslVersions = tls1.2
tcpSendBufSz = 0
useACK = false
useClientSSLCompression = true
writeTimeout = 300

[tcpout-server://prdpl2splunk02.domainame.com:9997]

[tcpout:default-autolb-group]
server = prdpl2splunk02.domainame.com:9997
Could you help me to make only 2 slices, with only success and failures? I will not be needing the services column; it is only working on one service, and the name was just provided for the needs of the bar graph. We won't be needing the service name, it is not useful. I just need success and failure in the pie chart.
Hi @madhav_dholakia, yes, you can refresh the dashboard panel every 10 seconds, but does your data really change every 10 seconds, and do you have data variations that must be displayed every 10 seconds? Also, does your search return results in less than 10 seconds? I'd suggest re-analyzing your needs to define a more realistic requirement to implement. Otherwise, continue to use real-time searches and give much more resources (CPUs on Indexers and Search Heads) to your infrastructure: remember that a search takes a CPU on the SH and on the IDX and releases it only when it finishes (in your case, never), so if you have 20 users that use the dashboard you have to add at least 20 CPUs to the SHs and to the IDXs. Ciao. Giuseppe
Hi, thanks again for your attention. I have reproduced this in my lab (using _internal) and I have noticed differences in how the searches are chained, despite the fact that when you click on the link it effectively concatenates and displays. I'll structure the findings in what I think is a more coherent way and post the results later today.
Do you mean to say that these two together

index=my_index
| timechart span=30m count(eventClass) by severity

return results, but when they are respectively the main search and the chained search, nothing is shown?  Posting actual output will not help in this case.  Does chart 1 use the same main search (without a chained search)?
While developing an add-on for a Splunk cluster, I encountered some problems:

1. I created an index named test on the Splunk indexer cluster, and I linked the search head cluster nodes to the indexer cluster with "./splunk edit cluster-config -mode searchhead -master_uri <Indexer Cluster Master URI>". I want to write data to this test index through the Splunk API and read the written data from other search head nodes, but I found that it is not working. Is this related to my having previously created the index on the search head node? If it is, how can I remove the index from the search head cluster?

2. Is KV Store data synchronized within the search head cluster? What should I do if I want to clean up the environment and delete a KV Store collection in the search head cluster?

3. What is the data communication mechanism within a search head cluster? I want to achieve some data synchronization for the add-on between multiple search heads. Is there any good method?

BR!
Yes, but then it won't be near real-time, which is required. Is there any other option to recreate this dashboard so that we can get data refreshed every 10 (or 15) seconds? Thank you.
Hi @madhav_dholakia, you cannot schedule a cron job every 10 seconds, but you can schedule one every 10 minutes using something like this: */10 * * * * Ciao. Giuseppe