Hello everyone, I'm trying to write a custom Python modular input to fetch some HTML tables (all the Windows 10 release history tables) from the Microsoft Windows 10 Release Information page. My idea is to create a modular input that runs once a month, uses the pandas.read_html function to ingest all the release history tables, and indexes all the rows into Splunk. I've figured out the Python code, but I'm having trouble importing the pandas library into my custom app. Following some Splunk Community posts, I placed exec_anaconda.py (from $SPLUNK_HOME\etc\apps\Splunk_SA_Scientific_Python_windows_x86_64\bin) inside %SPLUNK_HOME%\etc\apps\my_custom_app\bin, and also added the util folder (from %SPLUNK_HOME%\etc\apps\Splunk_ML_Toolkit\bin\utils) to avoid the "ModuleNotFoundError: No module named 'util'" Python exception. Also, as stated in the PSC README, I placed the following lines right at the beginning of the collect_events(...) function:

    def collect_events(helper, ew):
        import exec_anaconda
        exec_anaconda.exec_anaconda()
        import pandas
        ...

I keep getting the error: "ERROR Error encountered while loading Python for Scientific Computing, see search.log." But obviously the search.log file is empty, since this is not an SPL search. Is it possible to use the PSC libraries inside my modular input to accomplish this? Thank you.
Hi Team, we found a lot of faults reported as error transactions with HTTP Error 400, even though that is normal behavior for the application. It was bothering the customer because it showed a high percentage of error BTs. Is it possible to ignore (exclude) a URL only when it returns HTTP Error 400, but keep monitoring it in its normal state and for other abnormal error states? The URL we want to add is: /axway/bill-payment/fetch-biller-info. I have read through the Error Exception menu; unfortunately, I found no way to exclude/ignore a specific URL, only how to exclude/ignore an HTTP error as a whole. Can anyone offer advice on how this might be done?
Hi everyone, when I search for multiple items from a multiselect input, it does not work. I can search for "ALL" or one item only, but not multiple items. Here is the search:

index="billing_sales" source="produced_movie" NAME_ENG IN ("$field1$") | stats sum(AMOUNT) as TOTAL

How do I change the above search so that I can look up multiple field1 values?

{
    "visualizations": {
        "viz_7sJt3IPY": {
            "type": "splunk.singlevalue",
            "options": {
                "backgroundColor": "transparent",
                "majorColor": "#f8be44"
            },
            "dataSources": {
                "primary": "ds_i9R3dB04"
            }
        }
    },
    "dataSources": {
        "ds_DCcDyt7v": {
            "type": "ds.search",
            "options": {
                "query": "index=\"billing_sales\" source=\"produced_movie_ddish\" \n| table CARD_NUMBER, NAME_ENG, DESCR, AMOUNT, PRODUCT_ID, TRANS_DATE, CONTENT_ID, PRODUCT_ID"
            },
            "name": "Search_1"
        },
        "ds_dCpthBJm": {
            "type": "ds.chain",
            "options": {
                "extend": "ds_DCcDyt7v",
                "query": "| stats count by NAME_ENG"
            },
            "name": "content_name"
        },
        "ds_i9R3dB04": {
            "type": "ds.search",
            "options": {
                "query": "index=\"billing_sales\" source=\"produced_movie_ddish\" NAME_ENG IN (\"$field1$\") \n| stats sum(AMOUNT) as DDISH_TOTAL"
            },
            "name": "Search_2"
        }
    },
    "defaults": {
        "dataSources": {
            "ds.search": {
                "options": {
                    "queryParameters": {
                        "latest": "$global_time.latest$",
                        "earliest": "$global_time.earliest$"
                    }
                }
            }
        }
    },
    "inputs": {
        "input_global_trp": {
            "type": "input.timerange",
            "options": {
                "token": "global_time",
                "defaultValue": "-24h@h,now"
            },
            "title": "Global Time Range"
        },
        "input_1PggimcS": {
            "options": {
                "items": [
                    {
                        "label": "All",
                        "value": "*"
                    }
                ],
                "defaultValue": "*",
                "token": "field1",
                "clearDefaultOnSelection": true
            },
            "dataSources": {
                "primary": "ds_dCpthBJm"
            },
            "title": "CONTENT_NAME",
            "context": {
                "formattedConfig": {
                    "number": {
                        "prefix": ""
                    }
                }
            },
            "type": "input.multiselect"
        }
    },
    "layout": {
        "type": "absolute",
        "options": {
            "display": "auto-scale",
            "backgroundColor": "#294e70"
        },
        "structure": [
            {
                "item": "viz_7sJt3IPY",
                "type": "block",
                "position": {
                    "x": 20,
                    "y": 10,
                    "w": 200,
                    "h": 90
                }
            }
        ],
        "globalInputs": [
            "input_global_trp",
            "input_1PggimcS"
        ]
    },
    "description": "",
    "title": "content_producing_report"
}
I have a folder with logs; every hour one logfile is written to it. I also have an alert that is triggered when no file is written in the last hour (checking 15 minutes past the hour). Query:

index=xyz sourcetype=abc | eval since = now() - _time | search since < 3600

Mostly it works, but sometimes it is triggered even though I can see in the history that the logfile for that hour is present in Splunk with an accurate _time and nothing missing. What could be the problem?
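One likely culprit is indexing lag: an event's _time can fall inside the hour while the event only becomes searchable after the alert has already run, so "since < 3600" finds nothing at trigger time even though the data shows up later. A quick hedged check, reusing the index and sourcetype names from the post, is to compare _time with the internal _indextime field:

index=xyz sourcetype=abc earliest=-24h
| eval lag_seconds = _indextime - _time
| timechart span=1h max(lag_seconds) as max_lag

If max_lag regularly approaches the 15-minute buffer, widening the buffer or basing the alert on _indextime should stop the false triggers.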
Hi, I want to create a table from the sample log file entry by computing the field names from the entries defined in the JSON structure. There will be multiple field names, not just one. E.g., the JSON structure has entries like "something":"value"; "something" will be the field name, and "value" will be the value that forms the table entries. Working in https://regex101.com I have built a regex that does the job. However, when I put it into the Splunk search query, Splunk does not like the "]" in the regex I have generated.

This is the regex:

"((?:[^"\\\/\b\f\n\r\t]|\\u\d{4})*)"

Query in Splunk:

| rex "((?:[^"\\\/\b\f\n\r\t]|\\u\d{4})*)"

Error in Splunk: Error in 'SearchParser': Mismatched ']'.

This is the sample log:
-------------------
2022/08/31 04:33:10.897 | server| service| INFO | 1-223 |x.x.x.x.x.Payload | xxx-1111-1111111-11-111111111 | AAt: Update Headers: {AAgid=ID:jaaana-11111-1111111111111-3:487:1:1:50, cccc_ff_ssss=ABC_XYZ, ssssdel=false, cdmode=1, DelMode=2, abc_corel_id=xyx-11111-11111-11-111111, aa_rrr_cccc_cccc=AAAA, cust_svc_id=AAAA-DDD, crumberid=xyx-11111-11111-11-111111, svc_tran_origin=SSS, SSScoreed=Camel-SSS-1111-1111111-111, cccc_ff_ssss_aaaaa=AAAA, AAAType=null, cccc_ff_ssss_tata=AAA, AAAexxxx=0, avronnnn=url.add.add.com, AAAssssssss=1661920390882,tang_dik_jagah=ABC_XYZ, ver=0.1.2, AAAprrrrrr=4, AAArptooo=null, source_DOT_adaptr=mom, AAAjaaana=tAAic://toic,tang_dik_jagah_tata=AAA, targCTService=progr, SSScoreedAsBytes=[a@123, CamelAAARequestTimeout=600000, sedaTimeout=600000} {[{"type":"AAtiongo","pAAo":"AAAA","ssssssss":"2022-08-31 00:00:00","data":[{"chabbbi":"ca_1111_11111_AAtiongo_AAAA","tatajahajqaki":"AA 111","jahajqaki":{"numeo":"111","jahaaj":{"cde":"ARL_AA","couAAa":"AA","aaoo":"AAR"},"AAsuf":null},"sgnnnn":"AAR111","stppp":"J","muddStatuscde":"AA","kissak":"III","AAType3lc":"111","AAType5lc":"B111","rggggggg":"AAAAA","carrrrr":{"cde":"ARL_AA","couAAa":"AA","aaoo":"AAR"},"ddddddcde":"pubbb","pubbbjahajqaki":"AA 
111","jahajqakipubbb":{"numeo":"111","jahaaj":{"cde":"AA","couAAa":null,"aaoo":null}},"sssss":1098,"kkkkkss":834,"kitnaba":{"AAAAAA":"2022-08-2100:00:00","WWWW":"2022-08-2100:00:00","eeeeee":"2022-08-2100:00:00","sssssss":"2022-08-2100:00:00","ddddddd":"2022-08-2100:00:00","eeeeeeee":"2022-08-2100:00:00","ddddddddd":"2022-08-2100:00:00","ttttttt":"2022-08-2100:00:00","ttttttt":"2022-08-2100:00:00","Edddddd":"2022-08-2100:00:00","ffffff":"2022-08-2100:00:00","ddddddL":"2022-08-2100:00:00","dddddd":"2022-08-2100:00:00","Adddddd":"2022-08-2100:00:00","ssssT":"2022-08-2100:00:00","ddddd":"2022-08-2100:00:00","ggggg":"2022-08-2100:00:00","ffffff":"2022-08-2100:00:00","Eddddd":"2022-08-2100:00:00","ssssss":"2022-08-2100:00:00","Eddddd":"2022-08-2100:00:00"},"durdddd":{"Exxxxx":"Pdddd.oo","ScfffTTTT":"xxx1H0M0.000S","xxxxIDL":"-Pxxxx6M0.000S","ESTTTT":"PxxxxH26M0.000S"},"gallle":[{"aaaaaaa":"aaa000033","gffffnnnn":"111"}],"stsssss":[{"hhhhhh":"AA1111111","standnnnn":"S20"}],"blttttt":[{"hhhhhh":"ABB000003","beltnnnn":"aa11","beltAAenpttttt":"2022-08-2100:00:00","kkkkkkkpttttt":"2022-08-2100:00:00"}],"redddddd":{"SSSSS":[{"aalllll":"ALLUU99999","resssssss":"AA1111111","resssssssnnnn":"S20","pprrrrrsssss":"AAA11111"}],"bgggg_blt":[{"aalllll":"aaaaaa1111111","resssssss":"ABB000003","resssssssnnnn":"IB02","kitnaba":{"AAAAAA":"2022-08-31006:14:00a","AAAAAA":"2022-08-31006:14:00a"}}],"aaaaaaaaaaa_sss":[{"aalllll":"aaaaaa8888888","resssssss":"false"}],"aaaaaaaaaa_ssss":[{"aalllll":"aaaaaa8888888","resssssss":"GAT000033","resssssssnnnn":"120","pprrrrrsssss":"GAT000019"}],"qqqqqqqqqqqq":[{"aalllll":"qqqqqqqqqqqq","resssssss":"false"}]},"kkkkkk":[{"cde":"aaa_sss","tatAAde":"CAI","aaaaAAde":"PPPP","legnumeo":1},{"cde":"ABC_XYZ","tatAAde":"AAA","aaaaAAde":"AAAA","legnumeo":2}],"cdeshareList":[{"numeo":"1111","jahaaj":{"cde":"ARL_AA","couAAa":"AA","aaoo":"AAA"},"AAsuf":null,"pubbbjahajqaki":"AA 1111","jahajqakipubbb":{"numeo":"1111","jahaaj":{"cde":"AA","couAAa":null,"aaoo":null}}},{"numeo":"1111","jahaaj":{"cde":"ARL_CT","couAAa":"CT","aaoo":"CTH"},"AAsuf":null,"pubbbjahajqaki":"CT 1111","jahajqakipubbb":{"numeo":"1111","jahaaj":{"cde":"CT","couAAa":null,"aaoo":null}}}],"saaaaaa":{"ffff":"RRR","mapr":"Finalised","SSSGeneral":"AAened","AAceptance":"Finalised","loddacctrr":"SheCT_Finalised","brrrrrrdd":"AAened","IIIernal":"110"}}]}]} host = mucAAuplfrAA02 -----------------------  
So I have these datasets uploaded to my Splunk Enterprise instance on Windows, but under the little Edit menu there is no "Delete" option. Is there a way to delete them manually or forcefully?
I have this event:

(pool-4-thread-1 18a68b34-f4af-4940-9339-6201b5004bb8) (********): do_SMSGW (Request) : &from=TULBUR&to=********&text=*******:+Tanii+********+gereenii+tulburiin+uldegdel+59706.42T+tulbur+tulugduugui+tul+buh+heregleeg+2022-08-28-nd+haahiig+anhaarna+uu.
(pool-4-thread-2 3adfc9d2-86e3-4e6e-8767-08f94370075a) (********): do_SMSGW (Request) : &from=TULBUR&to=********&text=*******:+Tanii+********+gereenii+tulburiin+uldegdel+9900T+tulbur+tulugduugui+tul+buh+heregleeg+2022-08-28-nd+haahiig+anhaarna+uu.

I need to get the value between +uldegdel+ and +tulbur+ (the "needed value"). Please help, I'm new to Splunk.
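A hedged rex sketch based on the two sample events (the field name balance is illustrative); it captures everything between the literal +uldegdel+ and +tulbur+ markers:

| rex "\+uldegdel\+(?<balance>[^+]+)\+tulbur\+"

For the samples above this yields balance=59706.42T and balance=9900T; a following | eval balance=replace(balance, "T$", "") would strip the trailing unit if a numeric value is needed.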
Hi, I have a search that uses the chart command to split by 2 fields, such that the results are shown below. The data is split by Name and Month. I would like to add a row with the average of all Names for each Month, and a column with the average of all Months for each Name. I have tried using appendpipe and appendcols for each case, but couldn't quite figure out the syntax with a chart command. PS: Each row is already an appended subsearch.
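A rough sketch, assuming the base search ends in something like chart sum(value) over Name by Month (the field names here are placeholders): appendpipe can append a per-Month average row, and addtotals plus an eval can build the per-Name average column:

... | chart sum(value) over Name by Month
| appendpipe
    [ stats avg(*) as *
    | eval Name="Average" ]
| addtotals fieldname=row_total
| eval Average=round(row_total / 12, 2)
| fields - row_total

The divisor 12 stands in for the number of Month columns and would need to match the actual column count; placing addtotals after the appendpipe also gives the appended "Average" row its own row average.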
I am using the Splunk Observability REST API, specifically the "/apm/trace" endpoint. I have the following questions about the throttling limits that trigger HTTP 429 responses:
- What is the limit?
- Is it configurable anywhere within Splunk Observability?
- What type of throttling is used, such as hard, soft, or elastic/dynamic?
- Does it use a fixed window or a rolling window, with or without counters?
Thanks!
Hi, regarding spans, may I ask whether a Span ID should always be unique, meaning no two different spans can have the same Span ID? Thanks.
Hello, any suggestions on onboarding Cradlepoint router logs to Splunk? Please advise. Thanks in advance.
Hi all, I am trying to find the difference between two searches.

Search 1: index="xxx_prd" "/XX900/LT_TEST"
This returns about 20 records.

Search 2: index="xxx_prd" "http://xxx.yyy.com/XX900/LT_TEST"
This returns 15 records.

I want to get the 5 results that differ between search 1 and search 2. Please advise.

Thanks
Yatan
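Since search 2 is strictly a narrower match within the same index, a hedged sketch is to keep everything that matches the first string while excluding the second:

index="xxx_prd" "/XX900/LT_TEST" NOT "http://xxx.yyy.com/XX900/LT_TEST"

If the two result sets could diverge in both directions, the set command is an alternative: | set diff [search index="xxx_prd" "/XX900/LT_TEST"] [search index="xxx_prd" "http://xxx.yyy.com/XX900/LT_TEST"]; note that set is subject to subsearch result limits.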
We are receiving errors from the _internal index for JSON logs:

1. ERROR JsonLineBreaker - JSON StreamId:1254678906 had parsing error:Unexpected character: "s"
2. ERROR JsonLineBreaker - JSON StreamId:1254678906 had parsing error:Unexpected character: "a"

Sample log:

{  [-]
   level: debug
   message: Creating a new instance of inquiry call
   timestamp: 2022-08-25T20:30:45.678Z
}

My props.conf:

TIME_PREFIX=timestamp" : "
TIME_FORMAT= %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=40
TZ=UTC

How do we resolve this issue?
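JsonLineBreaker errors come from the structured-data pipeline, i.e. INDEXED_EXTRACTIONS = json is in effect for the sourcetype but something in the stream is not valid JSON (the sample above looks like the Search UI's pretty-printed rendering, so it is worth checking what the raw line actually contains). A hedged props.conf sketch for one-JSON-object-per-line events (the stanza name and the trailing Z in the timestamp format are assumptions):

[my_json_sourcetype]
INDEXED_EXTRACTIONS = json
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = "timestamp"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3NZ
MAX_TIMESTAMP_LOOKAHEAD = 40
TZ = UTC

Two details worth noting: the original TIME_PREFIX is missing the opening quote and assumes spaces around the colon, which raw JSON usually does not have; and with INDEXED_EXTRACTIONS the parsing happens on the forwarder, so these settings must be deployed there if a universal forwarder is sending the data.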
Hello everyone, I have been trying to understand how this alert works, because from my point of view it doesn't make sense. This message NEVER disappears from our Splunk instances, and I have been trying to find the real root cause, but I am not clear on how this works. I have this message:

The percentage of small buckets (75%) created over the last hour is high and exceeded the red thresholds (50%) for index=foo, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=11, small buckets=8

So I checked whether the logs have time-parsing issues, and there are none for the logs indexed by the foo index. Then I checked with this search:

index=_internal sourcetype=splunkd component=HotBucketRoller "finished moving hot to warm"
| eval bucketSizeMB = round(size / 1024 / 1024, 2)
| table _time splunk_server idx bid bucketSizeMB
| rename idx as index
| join type=left index
    [ | rest /services/data/indexes count=0
    | rename title as index
    | eval maxDataSize = case (maxDataSize == "auto", 750, maxDataSize == "auto_high_volume", 10000, true(), maxDataSize)
    | table index updated currentDBSizeMB homePath.maxDataSizeMB maxDataSize maxHotBuckets maxWarmDBCount ]
| eval bucketSizePercent = round(100*(bucketSizeMB/maxDataSize))
| eval isSmallBucket = if (bucketSizePercent < 10, 1, 0)
| stats sum(isSmallBucket) as num_small_buckets count as num_total_buckets by index splunk_server
| eval percentSmallBuckets = round(100*(num_small_buckets/num_total_buckets))
| sort - percentSmallBuckets
| eval isViolation = if (percentSmallBuckets > 30, "Yes", "No")
| search isViolation = Yes
| stats count

I ran that search over the last 2 days and the result is ZERO, but the red flag is still there, so I am not understanding what is going on. Here is the log indicating that foo is rolling from hot to warm:

08-30-2022 02:12:27.121 -0400 INFO HotBucketRoller [1405281 indexerPipe] - finished moving hot to warm bid=foo~19~AAD3329E-C8D9-4607-90FB-167760B4EB6F idx=foo from=hot_v1_19 to=db_1661054400_1628568000_19_AAD3329E-C8D9-4607-90FB-167760B4EB6F size=797286400 caller=size_exceeded _maxHotBucketSize=786432000 (750MB), bucketSize=797315072 (760MB)

So as far as I can see the reason is logical, caller=size_exceeded, due to the size. For information, this index receives data just once a day, at midnight. If you have any input I would really appreciate it.

Version 8.2.2
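One way to narrow this down: the HotBucketRoller lines carry a caller field (visible above as caller=size_exceeded), so a quick hedged breakdown of why buckets rolled during the alert's window is:

index=_internal sourcetype=splunkd component=HotBucketRoller "finished moving hot to warm" idx=foo earliest=-1h
| stats count by caller

Rolls with caller values other than size_exceeded (rolls triggered by restarts or idle timeouts, for instance) tend to produce the small buckets the health check counts, and an index that receives all its data in one burst at midnight is a classic candidate for them.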
Hello, I am getting the below error while trying to add a pipe "|" around all the result values.

Error: Failed to parse templatized search for field 'ResponseTime(ms)'

My search:

| table PeriodDate VendorName ContractName OccMetricCode Pagekey TransactionType TransactionDatetime ResponseTime(ms) Comment
| foreach * [ eval <<FIELD>>="|".<<FIELD>>."|"]

I am getting pipe-separated results for every field except ResponseTime(ms):

PeriodDate   ResponseTime(ms)  Comment
|2022/08/30| 0                 ||

Thanks in advance.
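The parentheses in ResponseTime(ms) are the problem: foreach substitutes the raw field name into the eval, and an unquoted name containing ( cannot be parsed as a field reference. Wrapping the template in single quotes, so eval treats the substituted name as a field reference on both sides, is the usual fix (a hedged sketch):

| foreach * [ eval '<<FIELD>>' = "|" . '<<FIELD>>' . "|" ]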
I have a standalone instance with existing data on it. I have created a new indexer cluster that does not include this standalone machine. All instances are running the same OS and Splunk version. Can I add the existing data to the cluster by adding the standalone instance to the cluster as a peer? What will the behavior be in such a case?  I'm aware of the bucket copying method, but I'm hoping there's a more hands-off method to accomplish this goal. 
Hello, I am looking at https://docs.splunk.com/Documentation/Splunk/9.0.0/Capacity/Parallelization and was wondering which systems to make the changes on. For instance, batch parallelization: should the limits be changed on the search heads, the indexers, or both? The same question applies to data models, report acceleration, and indexer parallelization. For what it's worth, I am running Splunk Enterprise 9 on a C1/C11 deployment. -jason
I have the following 2 logs.

DRT.log consists of the following log lines:

{"date_time":"20220823-13:11:11.622475033","severity":"INFO","dc":"DRT"}
{"date_time":"20220823-13:11:11.622475099","severity":"INFO","version":"1.1.1"}
{"date_time":"20220823-13:11:11.622475099","severity":"INFO","state":"running"}

And CME.log consists of the following log lines:

{"date_time":"20220823-13:11:11.622475033","severity":"INFO","dc":"CME"}
{"date_time":"20220823-13:11:11.622475099","severity":"INFO","version":"2.2.2"}
{"date_time":"20220823-13:11:11.622475033","severity":"INFO","state":"down"}

The output I want to display is a table that looks like the following:

DataCenter  Version  State
DRT         1.1.1    running
CME         2.2.2    down

I have noticed that if I specify the explicit source file, then my search query works for that individual source. For example:

index=exc_md_qa sourcetype="ctc:md:tickerplant" source="/splunk_log/DRT.log"
| spath
| search severity="INFO"
| fields dc, version, state
| stats values(dc) as DataCenter latest(version) as Version latest(state) as State

This search returns:

DataCenter  Version  State
DRT         1.1.1    running

And likewise, if I replace the source with the other log file:

index=exc_md_qa sourcetype="ctc:md:tickerplant" source="/splunk_log/CME.log"
| spath
| search severity="INFO"
| fields dc, version, state
| stats values(dc) as DataCenter latest(version) as Version latest(state) as State

This search yields:

DataCenter  Version  State
CME         2.2.2    down

However, if I run the search with a wildcard for the source, I only get partial results:

index=exc_md_qa sourcetype="ctc:md:tickerplant" source="/splunk_log/*.log"
| spath
| search severity="INFO"
| fields dc, version
| stats values(dc) as DataCenter latest(version) as Version latest(state) as State

This yields the following (with missing data from DRT):

DataCenter  Version  State
CME         2.2.2    down
DRT

Or, splitting by DataCenter, I don't get the state at all:

index=exc_md_qa sourcetype="ctc:md:tickerplant" source="/splunk_log/*.log"
| spath
| search severity="INFO"
| fields dc, version
| stats latest(version) as Version latest(state) as State by dc

This yields:

DataCenter  Version  State
CME         2.2.2
DRT         1.1.1

So the question is how to combine them into one search. I think the brunt of the issue is tying the dc, state, and version fields to the same source, but I'm not sure how to do that. Any help is much appreciated!
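Since dc, version, and state arrive as three separate events per file, the source field is what ties them together. A hedged sketch (reusing the index and sourcetype from the post) that groups by source first and then drops it:

index=exc_md_qa sourcetype="ctc:md:tickerplant" source="/splunk_log/*.log"
| spath
| search severity="INFO"
| fields source, dc, version, state
| stats latest(dc) as DataCenter latest(version) as Version latest(state) as State by source
| fields - source

Grouping by source keeps each file's three events in one bucket, so DataCenter, Version, and State land on the same row; grouping by dc cannot work here because only one of the three events per file actually carries the dc field.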
Hi there, I have a requirement where I have an index with two different sourcetypes:

index=a sourcetype=a1
index=a sourcetype=a2

There is a column in common between these two sourcetypes (e.g., corrlId). I want to display the records that are in sourcetype a1 but not in a2. Could someone tell me how to achieve this?

My rough query, which I am still working on, is this:

index=a sourcetype=a1
| search "*" trackrequest
| eval EDT_time = strftime(_time ,"%Y-%m-%d %H:%M:%S")
| rename a.corrlId as CorrlID, EDT_time as "TimeStamp1"
| join type=left correlId
    [search index=a sourcetype=a2
    | search "*" trackrequest
    | eval EDT_time = strftime(_time ,"%Y-%m-%d %H:%M:%S")
    | rename a.corrlId as CorrlID, EDT_time as "TimeStamp2" ]
| table "TimeStamp1", CorrlID, "TimeStamp2"

With my query, a single record repeats n times in the output without actually giving me the desired result, which is all the distinct missing values.
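A hedged join-free sketch for the question above (keeping the post's index and sourcetype names, and assuming the shared field is extracted as corrlId; the post mixes the spellings corrlId, correlId, and a.corrlId, so the actual field name needs checking): search both sourcetypes at once, group by the correlation ID, and keep the IDs that only ever appear in a1:

index=a (sourcetype=a1 OR sourcetype=a2) trackrequest
| stats dc(sourcetype) as st_count values(sourcetype) as st latest(_time) as last_seen by corrlId
| where st_count=1 AND st="a1"
| eval TimeStamp1 = strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| table corrlId, TimeStamp1

This avoids the row multiplication that join produces when an ID occurs many times on each side.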
I just upgraded a dev instance from 7.3.4 to 9.0.1, and splunkd would start but the web UI stopped working. Found these in splunkd.log:

08-30-2022 12:43:16.300 -0400 ERROR UiPythonFallback [22665 WebuiStartup] - Couldn't start appserver process on port 8065: Appserver at http://127.0.0.1:8065 never started up. Set `appServerProcessLogStderr` to "true" under [settings] in web.conf. Restart, try the operation again, and review splunkd.log for any messages that contain "UiAppServer - From appserver"
08-30-2022 12:43:16.300 -0400 ERROR UiPythonFallback [22665 WebuiStartup] - Couldn't start any appserver processes, UI will probably not function correctly!
08-30-2022 12:43:16.300 -0400 ERROR UiHttpListener [22665 WebuiStartup] - No app server is running, stop initializing http server

However, after adding the "appServerProcessLogStderr = true" setting to web.conf, I only see this one line in splunkd.log:

08-30-2022 12:48:53.628 -0400 INFO UiAppServer [28199 appserver-stderr] - Starting stderr collecting thread

There is no message with "UiAppServer" after that. Any thoughts / help would be much appreciated!