All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


We have 100 hosts, and for all of these hosts we want to prepend a keyword to the host name. For example, the hostnames are TEST1, TEST2, and TEST3, and we want to add a keyword called APP, so the final host names will be APPTEST1, APPTEST2, and APPTEST3. Can we do this at the UF level? Note: we don't want to do this based on source and sourcetype at the HF level, because of the default sources and sourcetypes.
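A minimal sketch of one option, assuming each forwarder can carry its own small config (pushed per host from a deployment server, for example): a UF's inputs.conf accepts a hardcoded host override, but it does not substitute the machine's hostname into a prefix for you, so each of the 100 forwarders would need its own value.

# $SPLUNK_HOME/etc/system/local/inputs.conf on the forwarder named TEST1
[default]
host = APPTEST1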
I need to create an alert when all of the queues below are at 100% for the respective indexer. For this I am using the built-in "DMC Alert - Saturated Event-Processing Queues" alert, but I need to tweak it a little to alert only when all four queue types (aggQueue.*, indexQueue.*, parsingQueue.*, and typingQueue.*) are at 100% for that host.

Query:

| rest splunk_server_group=dmc_group_indexer /services/server/introspection/queues
| search title=tcpin_queue* OR title=parsingQueue* OR title=aggQueue* OR title=typingQueue* OR title=indexQueue*
| eval fifteen_min_fill_perc = round(value_cntr3_size_bytes_lookback / max_size_bytes * 100,2)
| fields title fifteen_min_fill_perc splunk_server
| where fifteen_min_fill_perc > 99
| rename splunk_server as Instance, title AS "Queue name", fifteen_min_fill_perc AS "Average queue fill percentage (last 15min)"

Output:

Queue name       Average queue fill percentage (last 15min)   Instance
aggQueue.0       99.98    x
aggQueue.1       100.00   x
aggQueue.2       99.99    x
indexQueue.0     100.00   x
indexQueue.1     99.98    x
indexQueue.2     99.97    x
parsingQueue.0   100.00   x
parsingQueue.1   99.82    x
parsingQueue.2   99.98    x
typingQueue.0    99.96    x
typingQueue.1    99.99    x
typingQueue.2    99.96    x
aggQueue.0       100.00   y
aggQueue.1       100.00   y
aggQueue.2       100.00   y
indexQueue.0     100.00   y
indexQueue.1     100.00   y
indexQueue.2     100.00   y
parsingQueue.0   100.00   y
parsingQueue.1   100.00   y
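A minimal sketch of one way to express the "all four queue types saturated" condition, assuming the title values follow the <queue>.<pipeline-number> pattern shown above; queue_type, min_fill, and saturated_types are illustrative field names, not part of the built-in alert:

| rest splunk_server_group=dmc_group_indexer /services/server/introspection/queues
| search title=aggQueue* OR title=indexQueue* OR title=parsingQueue* OR title=typingQueue*
| eval queue_type = replace(title, "\.\d+$", "")
| eval fill_perc = round(value_cntr3_size_bytes_lookback / max_size_bytes * 100, 2)
| stats min(fill_perc) AS min_fill BY splunk_server queue_type
| where min_fill > 99
| stats dc(queue_type) AS saturated_types BY splunk_server
| where saturated_types = 4

The first stats keeps a queue type only if every pipeline of that type on the host is above the threshold; the final where fires only when all four types qualify on the same host.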
Good day. I am using a search query to correlate one field with its related jobs. I am using the query below with transaction, but while trying to get a unique value for one field, values go missing for the other fields as well. Please correct my query; the output I expect is a table keyed by BOX_NAME, with one unique value per BOX_NAME and the respective JOB_NAMEs under it.

index=indexname sourcetype=sourcetypename
| eval Actualstarttime=strftime(strptime(NEXT_START,"%Y/%m/%d %H:%M:%S"),"%H:%M")
| eval Job_start_by=strftime(strptime(LAST_START,"%Y/%m/%d %H:%M:%S"),"%H:%M")
| transaction BOX_NAME
| table BOX_NAME,JOB_NAME,JOB_GROUP,REGION,TIMEZONE,STATUS,Currenttime,STATUS_TIME,LAST_START,LAST_END,NEXT_START,DAYS_OF_WEEK,EXCLUDE_CALENDAR,RUNTIME,Actualstarttime,Job_start_by,START_SLA,AVG_RUN_TIME
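A minimal sketch of an alternative that avoids transaction (which can silently drop events and fields once its memory limits are hit), assuming one row per BOX_NAME with the related values as multivalue fields is acceptable; extend the values() list to the remaining columns as needed:

index=indexname sourcetype=sourcetypename
| eval Actualstarttime=strftime(strptime(NEXT_START,"%Y/%m/%d %H:%M:%S"),"%H:%M")
| eval Job_start_by=strftime(strptime(LAST_START,"%Y/%m/%d %H:%M:%S"),"%H:%M")
| stats values(JOB_NAME) AS JOB_NAME values(JOB_GROUP) AS JOB_GROUP values(REGION) AS REGION values(STATUS) AS STATUS values(Actualstarttime) AS Actualstarttime values(Job_start_by) AS Job_start_by BY BOX_NAME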
Hi friends. I'm using Splunk Cloud 9.0. I have installed the "Splunk Add-on for Microsoft Cloud Services" on a heavy forwarder, and I want to get data from an Azure Event Hub. I have created an Azure app account and configured the input details, but I'm not getting data into Splunk. I'm getting the error message below:

2022-12-26 10:26:32,874 level=WARNING pid=12124 tid=Thread-2 logger=azure.eventhub._eventprocessor.event_processor pos=event_processor.py:_load_balancing:286 | EventProcessor instance '35e1711c-18e6-480f-a203-ee6ec4070fc2' of eventhub 'eh-spyglass-metrics-aks-whsdapedge-eastus2-stg' consumer group 'splunk'. An error occurred while load-balancing and claiming ownership. The exception is AuthenticationError("Management authentication failed. Status code: 401, Description: 'Attempted to perform an unauthorized operation.'\nManagement authentication failed. Status code: 401, Description: 'Attempted to perform an unauthorized operation.'"). Retrying after 11.293821480572925 seconds

I have already raised a ticket to add my heavy forwarder's server IP to the whitelist of the Azure Event Hub. Could you please assist? How do I receive data from an Azure Event Hub into Splunk? Thanks in advance.
Hi, I need to generate a list of data given a max and min range, but I can't find a command (function) that does this. I will set max = 50 and min = 10 for the following examples. I think there are two ways to do it, with different arguments:

1. Set max = 50, min = 10, and the number (length) of outputs = 7. Then I would receive output like: [10, 16.66, 23.32, 29.98, 36.64, 43.3, 50]. In this case I don't need to set an interval; I only give how many values I want to receive.

2. Set max = 50, min = 10, and interval = 8.1. Then I would receive output like: [10, 18.1, 26.2, 34.3, 42.4, 50.5]. The last (max) value can be 50 or 50.5; both work for me. In this case I don't need to say how many values I want; I only give the interval.

Both ways aim to produce a list of data. Personally, I prefer option 1; it is closer to my need. By the way, I hope the output can be a list or multivalue field.
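A minimal sketch for option 1, assuming a multivalue result is acceptable and a Splunk version with mvmap (8.0+); it uses mvrange to build the indices 0..n-1 and mvmap to scale them into the [min, max] range:

| makeresults
| eval min=10, max=50, n=7
| eval step=(max - min) / (n - 1)
| eval series=mvrange(0, n)
| eval series=mvmap(series, round(min + series * step, 2))

For option 2, mvrange alone may be enough, since it takes a step argument directly: mvrange(10, 51, 8.1) yields [10, 18.1, 26.2, 34.3, 42.4, 50.5].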
We are facing metric gaps repeatedly. We need to collect the metrics for a specific time range.
I downloaded and installed Splunk Enterprise at home without procuring a license. Is my understanding below correct?

I will be able to index 500 MB of data daily. As long as I stay under that limit, I should be able to use Splunk forever.
Hi experts, I can't find how to modify the pie chart colors in Dashboard Studio. I have tried to add field colors under options in Dashboard Studio, but I am unable to edit the specific visualization in the source code field.

I have a field called "supp_type", and I want the pie chart to show green for the value "current", amber for "previous", and red for "old". When I include charting.fieldColors, it doesn't accept it or doesn't allow me to save the panel code. Can you help me add these custom colors in Dashboard Studio?

Query:

index=lab host=hmclab
| spath path=hmc_info{} output=LIST
| mvexpand LIST
| spath input=LIST
| where category == "hmc"
| search hmc_version=V* OR hmc_version=unknown
| dedup hmc_name
| eval supp_type=case(match(hmc_version,"^V10.*|^V9R2.*"), "current", match(hmc_version, "^V9R1.*"), "previous", match(hmc_version, "^V8.*|^V7.*"), "old")
| chart count by supp_type useother=false

Source code from Dashboard Studio:

{
  "type": "viz.pie",
  "dataSources": { "primary": "ds_RxEsq1cK" },
  "title": "HMC Versions",
  "options": {
    "chart.showPercent": true,
    "backgroundColor": "transparent",
    "charting.fieldColors": {"current":0x008000, "previous":0xffff00, "old":0xff0000}
  },
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}
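A minimal sketch of the panel source using Dashboard Studio's seriesColorsByField option instead of the Simple XML charting.fieldColors setting, assuming your Dashboard Studio version supports it. Two things are worth noting: 0x008000-style literals are not valid JSON (likely why the panel won't save), so the colors below are quoted hex strings, and the amber value #FFBF00 is an illustrative choice:

{
  "type": "viz.pie",
  "dataSources": { "primary": "ds_RxEsq1cK" },
  "title": "HMC Versions",
  "options": {
    "chart.showPercent": true,
    "backgroundColor": "transparent",
    "seriesColorsByField": {
      "current": "#008000",
      "previous": "#FFBF00",
      "old": "#FF0000"
    }
  },
  "context": {},
  "showProgressBar": false,
  "showLastUpdated": false
}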
How can I configure a cron expression so that an alert is sent every 2 hours, every day of the week? Many thanks!
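A minimal example, assuming the standard five-field cron format Splunk alert schedules use (minute, hour, day of month, month, day of week); this fires at minute 0 of every second hour, every day:

0 */2 * * *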
For fun, to learn Golang, I made a utility that executes a query from a text file. It uses credentials or an auth token, and the results are simply written to a file as unmodified JSON. It's not an SDK, but it is serviceable code: https://github.com/georgestarcher/querysplunk
Hi, I need to use a result value as a filter. In the table below, the second value in the RecipientDomain column should be fed back into the search as a filter, producing this query:

index=sec_office365_dlp sourcetype=sec_office365_dlp RecipientDomain=@yahoo.com | stats count by xxx

Help please.
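A minimal sketch using a subsearch to select that value and hand it to the outer search as a filter; the sort/head/tail steps that pick the "second" row are illustrative, so replace them with whatever ordering actually defines your second value:

index=sec_office365_dlp sourcetype=sec_office365_dlp
    [ search index=sec_office365_dlp sourcetype=sec_office365_dlp
      | stats count BY RecipientDomain
      | sort - count
      | head 2
      | tail 1
      | return RecipientDomain ]
| stats count by xxx

The return command emits RecipientDomain="<value>", which becomes the filter on the outer search.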
I have been using this query but couldn't remove the null rows. Please help me.

index=Window_wash
| rex field=_raw "TIME.TAKEN.FOR.(?<Vendor>\w+)"
| rex field=_raw "(?<Time_MS>\d+).ms"
| timechart span=1m max(Time_MS) as Time_MS
| outlier Time_MS
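A minimal sketch, assuming the null rows are the 1-minute buckets in which no event carried a Time_MS value: filtering after the timechart drops those buckets (at the cost of a gap in the time axis):

index=Window_wash
| rex field=_raw "TIME.TAKEN.FOR.(?<Vendor>\w+)"
| rex field=_raw "(?<Time_MS>\d+)\.ms"
| timechart span=1m max(Time_MS) as Time_MS
| where isnotnull(Time_MS)
| outlier Time_MS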
Hi, I have a Splunk event "Application -> start of the log". When I try to search for this log using the exact text, I do not get any results; I can see that the greater-than symbol is causing the problem. When I split my search into "Application -" AND "start of the log", Splunk was able to find the event. So how do I escape the greater-than symbol in the Splunk search command? I tried &gt;, which didn't work. (I also want to use this search text in a "transaction startswith=" command, so I cannot use an AND condition.)
Hi, I collected the Cisco device logs with the "Cisco Networks Add-on for Splunk Enterprise" and installed the "Cisco Networks App for Splunk Enterprise" on my search heads. Now I want to know which user executed which commands at what time. In the app, under Audit, "Configuration change transactions" shows me the time and user, but the "cmd" (command) field just shows "!exec: enable", i.e. only the command that lets the user enter other commands. Is something wrong with my configs in Splunk, or is it about the log level on the Cisco devices? Thanks in advance.
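If the devices only emit the enable step, the gap is usually on the device side rather than in Splunk. A minimal IOS sketch, assuming IOS/IOS-XE with the archive feature available, that makes a device send each configuration command to syslog:

configure terminal
 archive
  log config
   logging enable
   notify syslog contenttype plaintext
   hidekeys
 end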
I have tried a lot of things but no joy; recommendations would be helpful. v9 agent upgrade automation in silent mode is not working:

msiexec.exe /i splunkforwarder-9.0.2-17e00c557dc1-x64-release.msi DEPLOYMENT_SERVER="10.X.X.67:8089" LAUNCHSPLUNK=1 SPLUNKUSERNAME=Admin SPLUNKPASSWORD=S@v3MY@55!! AGREETOLICENSE=Yes /quiet

This appears to be an upgrade of Splunk.
--------------------------------------------------------------------------------
Splunk has detected an older version of Splunk installed on this machine. To finish upgrading to the new version, Splunk's installer will automatically update and alter your current configuration files. Deprecated configuration files will be renamed with a .deprecated extension. You can choose to preview the changes that will be made to your configuration files before proceeding with the migration and upgrade: If you want to migrate and upgrade without previewing the changes that will be made to your existing configuration files, choose 'y'. If you want to see what changes will be made before you proceed with the upgrade, choose 'n'. Perform migration and upgrade without previewing configuration changes? [y/n]
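A minimal sketch of one way past the interactive migration prompt, assuming the MSI has already laid down the new binaries: start the forwarder once with the documented non-interactive flags, which answer the migration questions automatically (adjust the install path if yours differs):

"C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" start --accept-license --answer-yes --no-prompt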
Hi all, we are working in a Splunk Cloud environment. I want to deploy a custom TIME_PREFIX configuration for one of the log sources. We have tested the configuration on a standalone Splunk box and it works fine, but after deploying the same configuration on the Splunk Cloud indexers and the IDM node, it's not working. Below is my props configuration:

TIME_PREFIX = "time":\s*"
MAX_TIMESTAMP_LOOKAHEAD = 128

Sample value: , "time": "2022-12-24T02:36:55.9183013Z",

My question is: do I need to deploy this configuration on the search heads as well?

Thanks,
Bhaskar
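For reference, a minimal props.conf sketch matching the sample value; the stanza name and the TIME_FORMAT line are assumptions derived from the sample's 7 fractional digits, not taken from your environment. Timestamp extraction is an index-time (parsing) setting, so it belongs on the first full Splunk instance that parses the data (IDM/HF or the indexers), not on the search heads:

[your:sourcetype]
TIME_PREFIX = "time":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7N
MAX_TIMESTAMP_LOOKAHEAD = 128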
Hi, if anyone can help it'll be greatly appreciated, and I'll pass on the karma. I keep getting this error when trying to add a search peer in the GUI:

{"status": "ERROR", "msg": "Encountered the following error while trying to save: Peer with server name 192.168.118.128 conflicts with this server's name."}

When I add the search peer in the CLI, it doesn't appear in the Search Peers section of the GUI either. Any help is gladly appreciated.
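That message usually means both instances report the same serverName, which is common when one VM was cloned from the other. A minimal sketch of the fix, assuming that is the case here; the hostname is illustrative, and a restart is required afterwards:

# $SPLUNK_HOME/etc/system/local/server.conf on one of the two instances
[general]
serverName = idx1.example.local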
After I perform a search and click the "Format" Icon above the search results, there is an option for "Wrap Results". I check this and it does an attempt at wrapping results, but I still have to scroll incredibly far to the right to see each result. It seems to just pick some arbitrary length and wraps it there. I should note, in the events there are plenty of spaces that could be used as clean wrap points but the system just seems to not use them. Is there some sort of config setting that controls wrap behaviour that I could tweak? Here is the event: {"cf_app_id":"uuid","cf_app_name":"app-name","deployment":"cf","event_type":"LogMessage","info_splunk_index":"splunk-index","ip":"ipaddr","message_type":"OUT","msg":"2022-12-22 19:11:30.242 DEBUG [app-name,02c11142eee3be456dc30ddb1b234d5f,f20222ba46461ea9] 28 --- [nio-8080-exec-1] classname : {\"data\":{\"fields\":[{\"__typename\":\"name\",\"field\":\"value\",\"field2\":\"value2\",\"field3\":\"value 3\",\"field4\":\"value4\",\"field5\":\"value5\",\"field6\":\"value6\",\"field7\":\"value7\",\"field8\":null,\"field9\":\"value9\",\"field10\":null,\"field11\":111059.0,\"field12\":111059.0,\"field13\":null,\"field14\":\"value14\",\"field15\":\"2018-10-01\",\"field16\":null,\"field17\":false,\"field18\":{\"field19\":\"value19\",\"fieldl20\":\"value20\",\"field21\":2.6,\"field22\":\"2031-10-31\",\"field23\":\"2017-11-06\"},\"field24\":{\"field25\":\"\",\"field26\":\"\"},\"field27\":{\"field28\":{\"field29\":0.0,\"field30\":0.0,\"field31\":240.63,\"field32\":\"2022-12-31\",\"field33\":0.0,\"field34\":\"9999-10-31\"}},\"field35\":[{\"field36\":{\"field37\":\"value37\"}},{\"field38\":{\"field39\":\"value39\"}}],\"field40\":{\"__typename\":\"value40\",\"field41\":\"value41\",\"field42\":\"value 42\",\"field43\":111059.0,\"field44\":\"2031-04-01\",\"field45\":65204.67,\"field46\":null,\"field47\":\"value47\",\"field48\":\"value48\",\"field49\":null,\"field50\":\"value50\",\"field51\":null,\"field52\":null}},{\"__typename\":\"value53\",\"field54\":\"value54\",\"field55\":\"value55\",\"field56\":\"value56\",\"field57\":\"value57\",\"field58\":\"value58\",\"field59\":\"9\",\"field60\":\"value60\",\"field61\":null,\"field62\":\"value62\",\"field63\":null,\"field64\":88841.0,\"field65\":38841.0,\"field66\":null,\"field67\":\"value67\",\"field68\":\"2018-10-01\",\"field69\":null,\"field70\":false,\"field71\":{\"field72\":\"value72\",\"field73\":\"value73\",\"field74\":2.6,\"field75\":\"2031-10-31\",\"field76\":\"2017-11-06\"},\"field77\":{\"field78\":\"\",\"field79\":\"\"},\"field80\":{\"field81\":{\"field82\":0.0,\"field83\":0.0,\"field84\":84.16,\"field85\":\"2022-12-31\",\"field86\":0.0,\"field87\":\"9999-10-31\"}},\"field88\":[{\"field89\":{\"field90\":\"value90\"}},{\"field91\":{\"field92\":\"value92\"}}],\"field93\":null},{\"__typename\":\"value94\",\"field95\":\"value95\",\"field96\":\"value96\",\"field97\":\"value97\",\"field98\":\"value98\",\"field99\":\"value99\",\"field100\":\"1\",\"field101\":\"value101\",\"field102\":null,\"field103\":\"value103\",\"field104\":\"359\",\"field105\":88025.0,\"field106\":79316.87,\"field107\":\"309\",\"field108\":\"value108\",\"field109\":\"2018-10-01\",\"field110\":\"2048-09-30\",\"field111\":false,\"field112\":{\"field113\":\"value113\",\"field114\":\"value114\",\"field115\":2.35,\"field116\":\"2031-10-31\",\"field117\":\"2017-11-06\"},\"field118\":{\"field119\":\"\",\"field120\":\"\"},\"field121\":{\"field122\":{\"field123\":341.58,\"field124\":0.0,\"field125\":155.33,\"field126\":\"2022-12-31\",\"field127\":186.25
,\"field128\":\"2022-12-31\"}},\"field129\":[{\"field130\":{\"field131\":\"value131\"}},{\"field132\":{\"field133\":\"value133\"}}],\"field134\":null}]}}","origin":"rep","source_instance":"0","source_type":"APP/PROC/WEB","timestamp":1671732690243306564} Here is a screenshot: wrap toggle is enabled wrap not working as you can see the horizontal scrolbar
I would like to create an alert for agent failure. The alert has to be triggered when any Splunk OTel agent fails to run on any host for a particular time grain. I also need to create dashboards showing the list of running and non-running host names along with the agent version.
I am using the Python SDK to add the allow_skew setting to savedsearches. See the generalized code snippet below:

import splunklib.client as client

splunk_svc = client.connect(host="localhost", port=8089, username="admin", password="******")
savedsearch = splunk_svc.saved_searches["alert-splnk-test_email_v1"]
new_skew = "5m"
kwargs = {"allow_skew": new_skew}
savedsearch.update(**kwargs).refresh()

This code works and adds 'allow_skew = 5m' to the specific savedsearch stanza in {app/local OR system/local}/savedsearches.conf under [alert-splnk-test_email_v1]. The code can also be extended to more/all savedsearches on the platform, and it replicates the changes in a SH cluster, as expected.

I want a reliable way to remove/erase the allow_skew setting from specific savedsearches, preferably using the Python SDK. The setting needs to be removed from the stanza, so that the allow_skew setting from system/local/savedsearches.conf [default] is picked up.

The only other ways I could think of are:

1. Using the class splunklib.client.Stanza(service, path, **kwargs) somehow. Any directions on how to use it?
2. Recreating the savedsearches without allow_skew, but that would mean a lot of work for a bunch of savedsearches.

Any help is appreciated.
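A minimal sketch of a workaround, assuming the conf-backed REST endpoints behind the SDK do not support deleting an individual key from a stanza: overwrite allow_skew with the value your [default] stanza would supply, so the effective behaviour matches an absent setting. The "0" below is an assumption; check your own system default first:

import splunklib.client as client

splunk_svc = client.connect(host="localhost", port=8089,
                            username="admin", password="******")

savedsearch = splunk_svc.saved_searches["alert-splnk-test_email_v1"]

# Overwrite rather than delete: reset allow_skew to the assumed
# system default ("0"), which disables skew just like an absent setting.
savedsearch.update(allow_skew="0").refresh()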