All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

The Splunk Add-on for Citrix NetScaler is continuously logging the following ERRORs in splunkd.log. Any solution?

12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py" splunklib.binding.HTTPError: HTTP 404 Not Found -- citrix_netscaler_templates does not exist
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py" During handling of the above exception, another exception occurred:
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py" Traceback (most recent call last):
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"   File "/opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/lib/solnlib/conf_manager.py", line 457, in get_conf
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"     conf = self._confs[name]
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"   File "/opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/lib/splunklib/client.py", line 1816, in __getitem__
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"     raise KeyError(key)
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py" KeyError: 'citrix_netscaler_templates'
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py" During handling of the above exception, another exception occurred:
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py" Traceback (most recent call last):
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"   File "/opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/lib/solnlib/utils.py", line 153, in wrapper
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"     return func(*args, **kwargs)
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"   File "/opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/lib/solnlib/conf_manager.py", line 459, in get_conf
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"     raise ConfManagerException(f"Config file: {name} does not exist.")
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py" solnlib.conf_manager.ConfManagerException: Config file: citrix_netscaler_templates does not exist.
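The tracebacks all boil down to solnlib failing to find a citrix_netscaler_templates.conf for the add-on. A minimal sketch for checking the usual locations (the paths and filename here are assumptions inferred from the error text, not confirmed add-on internals):

```python
import os

def existing_confs(paths):
    """Return the subset of candidate .conf paths that actually exist on disk."""
    return [p for p in paths if os.path.isfile(p)]

# candidate locations, assuming the standard Splunk app layout seen in the traceback
candidates = [
    "/opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/default/citrix_netscaler_templates.conf",
    "/opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/local/citrix_netscaler_templates.conf",
]

print(existing_confs(candidates))
```

An empty list would be consistent with the 404/KeyError: the errors would then reflect a missing or never-created template configuration rather than a code bug.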
The result should look like the table below. I need to find the matching product number within customers, and the result should be the matching product number, if not null. Can you please suggest an approach?

Customer_name  Product_number  Result
Customer1      P1, P2, P3, P4  P1, P2
Customer 2     P2, P5          P2
Customer 3     P3, P6          Null
We have a distributed environment with 4 Splunk indexers that are consuming high memory. Usage reaches 100% and the indexer remains unreachable until we restart the splunkd service. Once restarted, memory comes down, and the same pattern repeats on the other indexers within a span of a couple of hours. 64 GB of physical memory is available on each indexer, and saved/scheduled searches are not consuming high memory. We are unable to understand why there is a spike in memory usage. In the DMC, it shows that splunkd is using high physical memory (usage by process class), and the PID keeps increasing as below. Please suggest how I can find the root cause of this issue and how to fix it.
Hi, I'm looking for how to make a conditional stats aggregation query according to a form input "With users" (value: Yes or No). I have a list of events per user.

When the form input "With users" equals "Yes", I'd like to present a table like this:

User  URI  avgNb  avgDuration
A     A    1      1
A     B    2      5
A     C    3      1
B     A    5      9
B     C    6      10
C     A    4      11
C     B    6      8

Query: | stats count as avgNb avg(DUR) as avgDuration by USR

And when the form input "With users" equals "No", I'd like to present this one:

URI  avgNb       avgDuration
A    3.33333333  7
B    4           6.5
C    4.5         5.5

Query: | stats count as avgNb avg(DUR) as avgDuration

How can I build my query according to this form input condition? Thanks
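The underlying logic here — switch the group-by key on a flag — can be sketched outside SPL. A small Python illustration over the per-user rows from the "Yes" table (the data and the with_users flag mirror the question; everything else is illustrative):

```python
from collections import defaultdict

# (User, URI, avgNb, avgDuration) rows from the "With users = Yes" table
rows = [
    ("A", "A", 1, 1), ("A", "B", 2, 5), ("A", "C", 3, 1),
    ("B", "A", 5, 9), ("B", "C", 6, 10),
    ("C", "A", 4, 11), ("C", "B", 6, 8),
]

def aggregate(rows, with_users):
    """Group by (User, URI) when with_users is True, else by URI alone."""
    groups = defaultdict(list)
    for user, uri, nb, dur in rows:
        key = (user, uri) if with_users else uri
        groups[key].append((nb, dur))
    # average both measures within each group
    return {k: (sum(n for n, _ in v) / len(v), sum(d for _, d in v) / len(v))
            for k, v in groups.items()}

print(aggregate(rows, with_users=False))
# URI "B" -> avgNb 4.0, avgDuration 6.5, matching the "No" table above
```

In SPL, one common way to get the same effect is to build the group key with eval from the form token (or toggle between the two queries with the token); the sketch above only illustrates the conditional key.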
Hi guys,

I am comparing the values from a CSV with those returned in JSON format by a Splunk search.

At the moment the search works as I want it. But I noticed that in some instances the results from the Splunk search do not bring back all the entries, simply because for that customer they do not exist.

In the CSV I have all the entries that should exist and match, and if one doesn't match, it is returned as a result. Where I am struggling is getting the search to also output, in effect, "hold on, this entry with this value is in the CSV but it's not in the search."

The entries that are not returned by the search are important to us, because it means something isn't turned on and we need to go to that customer and rectify it.

The search at the moment looks like this:

index=main sourcetype="my_stats" type="add-ons"
| spath config{}
| mvexpand config{}
| spath input=config{}
| lookup add-ons.csv "Configuration Item" as displayName OUTPUTNEW "Configuration Setting" as "default"
| stats list(type) as type list(displayName) as item list(name) as value list(default) as default list(owner) as owner by company

Thanks, Greg
Hi guys. I have a distributed setup consisting of 1 search head, 1 deployment/license server, and 1 indexer, plus a whole bunch of universal forwarders.

I am trying to filter out some of the data coming in with transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = "name":"System availability"
DEST_KEY = queue
FORMAT = indexQueue

and a props.conf:

[Zabbix-history]
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 300
detect_trailing_nulls = auto
TIME_PREFIX = \"clock\":
KV_MODE = json
AUTO_KV_JSON = true
TRANSFORMS-set = setnull,setparsing

A log example that I would like to index, matching the regex in transforms.conf:

{"host":{"host":"xxxxx","name":"xxxx"},"groups":["xxxx Prod","xxxx","Windows servers"],"item_tags":[{"tag":"SAP Basis","value":""},{"tag":"System availability","value":""},{"tag":"SID1","value":""},{"tag":"Product","value":"Web Server"},{"tag":"SID","value":"WSP"}],"itemid":900162,"name":"System availability","clock":1670486400,"count":13,"min":1,"avg":1,"max":1,"type":3}

Currently the props.conf and transforms.conf are on the indexer in the designated app. It is currently filtering out all the logs with the sourcetype Zabbix-history, and not indexing the "name":"System availability" events. Do the props/transforms also need to be on the search head, or pushed to the universal forwarders with the deployment server?
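As a side note, index-time props/transforms like these generally take effect on the first full Splunk instance that parses the data (the indexer or a heavy forwarder), not on universal forwarders or the search head. Independently of placement, the routing regex itself can be sanity-checked outside Splunk; a small Python check against an abridged version of the sample event (note the match is case-sensitive):

```python
import re

# abridged sample event from the question
event = '{"itemid":900162,"name":"System availability","clock":1670486400,"count":13}'

# the [setparsing] REGEX from transforms.conf
keep = re.compile(r'"name":"System availability"')

print(bool(keep.search(event)))   # matching events get re-routed to indexQueue
print(bool(re.search(r'"name":"System Availability"', event)))  # capital "A" does not match
```

If events that should be kept are still dropped, a case mismatch like the one shown above is a common culprit, since setnull matches everything and only a later matching transform re-routes the event.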
Hello, in old dashboard versions I was able to create multi-line strings in table cells. However, in Dashboard Studio it doesn't work. Is there a workaround?

Example for Dashboard Studio:

{
    "dataSources": {
        "ds_search_1": {
            "type": "ds.search",
            "options": {
                "query": "| makeresults | eval value=\"line1\r\nline2\r\nline3\" \r\n| table value",
                "queryParameters": {
                    "earliest": "-24h@h",
                    "latest": "now"
                }
            }
        }
    },
    "visualizations": {
        "viz_table_1": {
            "type": "splunk.table",
            "options": {
                "count": 100,
                "dataOverlayMode": "none",
                "drilldown": "none",
                "percentagesRow": false,
                "rowNumbers": false,
                "totalsRow": false,
                "wrap": true
            },
            "dataSources": {
                "primary": "ds_search_1"
            }
        }
    },
    "inputs": {
        "input_global_trp": {
            "type": "input.timerange",
            "options": {
                "token": "global_time",
                "defaultValue": "-24h@h,now"
            },
            "title": "Global Time Range"
        }
    },
    "layout": {
        "type": "grid",
        "options": {},
        "structure": [
            {
                "item": "viz_table_1",
                "type": "block",
                "position": {
                    "x": 0,
                    "y": 0,
                    "w": 1200,
                    "h": 250
                }
            }
        ],
        "globalInputs": [
            "input_global_trp"
        ]
    },
    "title": "dashboardstudio",
    "defaults": {
        "dataSources": {
            "ds.search": {
                "options": {
                    "queryParameters": {
                        "latest": "$global_time.latest$",
                        "earliest": "$global_time.earliest$"
                    }
                }
            }
        }
    }
}

Example for XML dashboard:

<dashboard version="1.1">
  <label>delete_me_8</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | eval value="line1 line2 line3" | table value</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>
Hello experts! This is a question about the FIFO queue type in SQS-based S3 inputs (Splunk Add-on for AWS).

- Is there a setting that must be enabled in the Splunk Add-on for AWS when using SQS-based S3 with the queue type set to FIFO?
- Does the add-on for AWS automatically recognize the queue type?

Also, I know that if we use FIFO in SQS, we cannot use event notifications (SNS) from S3 to SQS. Does that mean Splunk only supports S3 -> SNS -> SQS (standard queue)? I just want to know how to use a FIFO queue in the add-on for AWS. Thanks in advance!
Hi. I have a use case to add panel details for one of the dashboards. After the panel name, I need to add a hyperlink. When I click on it, it should pop up a window with static information that I provide. Any suggestions?
Hi all,

I am trying to extract the values that follow context, userid, username, and groupid.

Sample partial event:

{ "type": "login","context": "Rsomeserver:8877-T1670321752-P18407-T030-C000025-S38","sequence": 998,"message": { "state": "ok","agent": true,"userid": "User0000000949","loginid": "somelogin101","ownerid": "system","username": "John Smith","cssurl": "[\"/css/somepage.css\",\"/branding/\"]","groupid": "Group0000000945","windows": [ {"name":"something","id":"someid","url":"/someurl//

I started with this approach:

"context": "(?<SessionID>[^\"]*)".*?"username"+: "(?<Username>[^\"]*)"

This seems to compile on regex101, but in rex it's throwing an error:

Error in 'SearchParser': Missing a search command before '^'. Error at position '141' of search query 'search index=<removed> ("\"login\"\,\"contex...{snipped} {errorcontext = ?<userid>[^\"]*)"}'.

My aim is then to use this data to join on the context value with another search, but I'm looking for help on where I'm going wrong with my rex. As the JSON seems to be truncated, I don't think I can treat it as JSON, so any help with a rex extraction would be greatly appreciated.
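For what it's worth, the pattern itself (with the stray + after "username" dropped) does extract both values from the sample; the SPL error is more likely about how the quotes are escaped inside the rex string than about the regex. A quick Python check against an abridged copy of the sample event:

```python
import re

# abridged sample event from the question
event = ('{ "type": "login","context": "Rsomeserver:8877-T1670321752-P18407-T030-C000025-S38",'
         '"sequence": 998,"message": { "state": "ok","username": "John Smith"')

pattern = r'"context": "(?P<SessionID>[^"]*)".*?"username": "(?P<Username>[^"]*)"'
m = re.search(pattern, event, re.DOTALL)  # DOTALL in case the two fields span lines

print(m.group("SessionID"))  # Rsomeserver:8877-T1670321752-P18407-T030-C000025-S38
print(m.group("Username"))   # John Smith
```

Inside an SPL rex string, every literal double quote in that pattern has to be backslash-escaped, so a missing escape is a plausible cause of the SearchParser error.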
Can someone please give me an explanation of what the rex command below is doing? I do not understand the \w+, \s+, \d+, etc.

| rex field=_raw "(?ms)^\\w+\\s+\\d+\\s+\\d+:\\d+:\\d+\\s+\\w+\\s+\\w+\\s+\\w+:\\s+\\w+:\\s+\\w+\\s+\\w+:\\s+\\w+\\s+\\w+\\s+\\w+:\\s+\\d+\\s+\\w+\\s+\\w+:\\s+\\d+\\-\\d+\\-\\d+\\s+\\d+:\\d+:\\d+\\s+\\w+:\\s+
(?P<Time>[^ ]+)\\s+
(?P<Trn_Total>\\d+)\\s+
(?P<Trn_Interval>\\d+)\\s+
(?P<TPS>[^ ]+)\\s+
(?P<SW_Inbound>[^ ]+)\\s+
(?P<SW_Outbound>[^ ]+)\\s+
(?P<SW_Total>[^ ]+)\\s+
(?P<SW_Ext_Pmc>[^ ]+)\\s+
(?P<SW_Int_Pmc>\\d+\\.\\d+)" offset_field=_extracted_fields_bounds
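In brief: \w+ matches one or more word characters (letters, digits, underscore), \s+ one or more whitespace characters, \d+ one or more digits, and (?P<Name>...) is a named capture group; the doubled backslashes are just escaping inside the quoted SPL string. A small Python illustration of the same tokens against a made-up syslog-style prefix:

```python
import re

line = "Dec 8 10:55:55 host proc"

# \w+ = word chars, \s+ = whitespace, \d+ = digits, \d+:\d+:\d+ = a timestamp shape
m = re.match(r"^(\w+)\s+(\d+)\s+(\d+:\d+:\d+)\s+(\w+)", line)

print(m.groups())  # ('Dec', '8', '10:55:55', 'host')
```

The long rex in the question is the same idea at scale: it skips over a fixed-shape log preamble token by token, then captures the fields of interest into names like Time and Trn_Total.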
I created a landing page for all applications, but the login information is visible in the URL. How can I change that XML? I don't want to see the login information in the URL. Attached is the code I am using:

<panel>
<input type="dropdown" token="user_name" searchWhenChanged="true">
<label>User Name</label>
<selectFirstChoice>true</selectFirstChoice>
<search>
<query> | rest /services/authentication/current-context splunk_server=local | fields username | eval username=mvindex(split(username,"@"),0) </query>
</search>
<fieldForLabel>username</fieldForLabel>
<fieldForValue>username</fieldForValue>
</input>

<div class="dropdown-content">
<a>$form.user_name$</a>
<a href="*" target="_blank">My Profile</a>
</div>

Any reply would be highly helpful.

Thanks,
Splunk lover
I'm following the example provided here: https://docs.splunk.com/Documentation/Splunk/9.0.2/Workloads/AdmissionRules#Example_admission_rules

search_time_range=alltime AND (NOT role=sc_admin) AND (NOT app=splunk_instance_monitoring)

However, when I look in the monitoring console, it shows that the rule is blocking some things that I believe are built-in searches (we use Splunk Cloud):

Cleanup Models For Predictive Analytics
itsi_content_packs_status_update
Telemetry - Inputs
itsi_event_grouping
Telemetry - Volume

All of these have the user "nobody". I tried to add AND (NOT user=nobody) to my workload rule, but it tells me:

validation failed with error=invalid value of predicate 'user'
I'd love to be able to track new AppInspect releases as they get released to PyPI.
Dear Splunk Community:

I have the following search query:

<Basic_search> duration
| stats count, avg(duration), perc99(duration) by path_template

Attached please find a sample of the screen result for the above search.
Dear Splunk Community:

I have the following search query:

<Basic_Search>
| chart count by path_template, http_status_code
| addtotals fieldname=total
| foreach 2* 3* 4* 5* [ eval "percent_<<FIELD>>"=round(100*'<<FIELD>>'/total,2), "<<FIELD>>"=if('<<FIELD>>'=0 , '<<FIELD>>', '<<FIELD>>'." (".'percent_<<FIELD>>'."%)")]
| fields - percent_* total 2* 3* 4*

Attached is the screen result of the above query, which shows the 500s columns. I need to modify the above search so that it only displays the numbers where the percentage is greater than 0.01%. How do I do that? Thanks!
We set up LDAP and attempted to set up single sign-on using a reverse proxy, following "About Single Sign-On using reverse proxy" in the Splunk documentation. The settings did not take, so we removed them and restarted. Now we get the "This browser is not supported by Splunk" error, where we could previously see the login page.
Hello all,

Is it possible to download a custom app that has been vetted and loaded into Splunk Cloud? I have a customer who has uploaded apps and no longer has the source code, but I can't see any way to download them. Can this be done? In particular, the apps have Python files that I need to access, but the whole app would be good. Their Splunk Cloud version is 8.2.2203.4 with the Victoria experience.

Thanks, Keith
I have a .csv with this format (this is a mock, just to give you an idea of the pattern):

code, message,
1, "Not found",
2, "Internal error",
3, "Success",

My search allows me to do a stats count by code, but not by message. What I need is to return a table with each message and its count.

What I have so far is this query, but it returns a table of code by count, and I need message by count (and every category must be returned, even those with a count of zero):

the search
| append [| inputlookup the csv file]
| stats count by message

I tried to play with fields and table, but I don't get the desired result.
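The logic being asked for — count by message, but keep every message from the lookup even at zero — can be sketched in Python (the code/message pairs come from the mock CSV; the event codes are made up):

```python
from collections import Counter

# the lookup CSV: code -> message
lookup = {"1": "Not found", "2": "Internal error", "3": "Success"}

# codes seen in the search results (illustrative)
event_codes = ["1", "1", "3"]

counts = Counter(lookup[c] for c in event_codes)

# zero-fill: every message from the lookup appears, even with no events
table = {msg: counts.get(msg, 0) for msg in lookup.values()}

print(table)  # {'Not found': 2, 'Internal error': 0, 'Success': 1}
```

In SPL, one common equivalent is to stats count by message, then append the lookup rows with count=0 (| append [| inputlookup ... | eval count=0]) and re-aggregate with | stats sum(count) as count by message, so zero-count messages survive.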
I have an app installed, Splunk_TA_remedy, and I'd like to change some configuration properties in its alert_actions.conf, but I can't see a way to do this in the UI. I'm considering forking Splunk_TA_remedy and packaging this config as a separate app to install onto my deployment, overriding the config in Splunk_TA_remedy. In a Splunk Enterprise deployment I would simply make these changes in $SPLUNK_HOME/etc/apps/Splunk_TA_remedy/local/alert_actions.conf. How can I achieve the same in Splunk Cloud?