All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, we had a standalone search head and indexer in a pre-production environment; I have since created a new clustered environment with 2 search heads and 2 indexers. I want to add the old non-clustered search head and indexer to the existing cluster. Could you let me know the right commands/procedure? Do I need to remove the old Splunk instances and reinstall from scratch? I understand the old non-clustered data may be removed, but this is not a problem, as most of it is frozen. Thanks for your help.
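For reference, the usual CLI for enlisting existing standalone instances into a cluster looks like the sketch below; the manager URI, replication port, and secret are placeholders to substitute with your own values:

# On the old indexer, join it to the cluster as a peer (run from $SPLUNK_HOME/bin):
splunk edit cluster-config -mode peer -master_uri https://manager.example.com:8089 -replication_port 9887 -secret your_cluster_secret
splunk restart

# On the old search head, attach it to the cluster:
splunk edit cluster-config -mode searchhead -master_uri https://manager.example.com:8089 -secret your_cluster_secret
splunk restart

No reinstall should be needed for this. As I understand it, buckets created before an instance joins are treated as standalone buckets: they stay searchable, but are never replicated across the cluster.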
Is there a more elegant way to do this? I'm new to using rex and I can't seem to strip the multiple parentheses and slashes out of a field without using replace. (I don't have control over the data; I know it would be better to strip it out upstream.) These do work, but in some cases there are more parentheses and slashes. Is there a way to strip all of them out at once, or do I need to keep adding repeated phrases?

| rex mode=sed field=Field_A "s/\(\)/ /g"
| rex mode=sed field=Field_B "s/\(\)/ /g"
| rex mode=sed field=Field_B "s/\// /g"
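One possible consolidation, as a sketch only: a sed character class strips every parenthesis and slash in a single pass per field, so the repeated rex calls collapse to one per field. This replaces each matched character with a space, like the originals; use an empty replacement (s/[()\/]//g) to delete them outright:

| rex mode=sed field=Field_A "s/[()\/]/ /g"
| rex mode=sed field=Field_B "s/[()\/]/ /g"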
After we upgraded to v9.0.1 we get a warning when following dashboard-generated links pointing outside Splunk:

Redirecting away from Splunk
You are being redirected away from Splunk to the following URL: https://[some non-splunk web-server]
Note that tokens embedded in a URL could contain sensitive information.

It comes with a "Don't show again" option, but it shows again every time anyway. Is there somewhere to disable this warning? Thanks
Hello everyone, I'm trying to write a custom Python modular input to fetch some HTML tables (all the Windows 10 release history tables) from the Microsoft Windows 10 Release Information page. My idea is to create a modular input that runs once a month, uses the pandas.read_html function to ingest all the release history tables, and indexes all the rows into Splunk. I've figured out the Python code, but I have issues importing the pandas library into my custom app. I've read some Splunk Community posts, and I've placed exec_anaconda.py (from $SPLUNK_HOME\etc\apps\Splunk_SA_Scientific_Python_windows_x86_64\bin) inside %SPLUNK_HOME%\etc\apps\my_custom_app\bin, and also added the util folder (from %SPLUNK_HOME%\etc\apps\Splunk_ML_Toolkit\bin\utils) to avoid the "ModuleNotFoundError: No module named 'util'" Python exception. Also, as stated in the PSC README, I've placed the following lines right at the beginning of the collect_events(...) function:

def collect_event(helper, ew):
    import exec_anaconda
    exec_anaconda.exec_anaconda()
    import pandas
    ...

I keep getting the error: "ERROR Error encountered while loading Python for Scientific Computing, see search.log." But obviously the search.log file is empty, since this is not an SPL search. Is it possible to use the PSC libraries inside my modular input to accomplish this? Thank you.
Hi Team, We found a lot of faults flagged as error transactions with HTTP Error 400, even though this is normal behavior for the application. It was bothering the customer because it showed a high percentage of error BTs. Is it possible to configure the rule to ignore (exclude) the URL only when it returns HTTP Error 400, but keep monitoring it for the normal state and other abnormal error states? We want to add the URL: /axway/bill-payment/fetch-biller-info. I have read through the Error Exception menu; unfortunately, I found no way to exclude/ignore a specific URL, only how to exclude/ignore an HTTP error as a whole. Can anyone offer advice on how this might be done?
Hi everyone, when I search for multiple items from the multiselect input, it does not work. I can search for "All" or a single item, but not multiple items. Here is the search:

index="billing_sales" source="produced_movie" NAME_ENG IN ("$field1$")
| stats sum(AMOUNT) as TOTAL

How do I change the search above so that I can look up multiple field1 values? Here is the dashboard definition:

{
  "visualizations": {
    "viz_7sJt3IPY": {
      "type": "splunk.singlevalue",
      "options": {
        "backgroundColor": "transparent",
        "majorColor": "#f8be44"
      },
      "dataSources": {
        "primary": "ds_i9R3dB04"
      }
    }
  },
  "dataSources": {
    "ds_DCcDyt7v": {
      "type": "ds.search",
      "options": {
        "query": "index=\"billing_sales\" source=\"produced_movie_ddish\" \n| table CARD_NUMBER, NAME_ENG, DESCR, AMOUNT, PRODUCT_ID, TRANS_DATE, CONTENT_ID, PRODUCT_ID"
      },
      "name": "Search_1"
    },
    "ds_dCpthBJm": {
      "type": "ds.chain",
      "options": {
        "extend": "ds_DCcDyt7v",
        "query": "| stats count by NAME_ENG"
      },
      "name": "content_name"
    },
    "ds_i9R3dB04": {
      "type": "ds.search",
      "options": {
        "query": "index=\"billing_sales\" source=\"produced_movie_ddish\" NAME_ENG IN (\"$field1$\") \n| stats sum(AMOUNT) as DDISH_TOTAL"
      },
      "name": "Search_2"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "latest": "$global_time.latest$",
            "earliest": "$global_time.earliest$"
          }
        }
      }
    }
  },
  "inputs": {
    "input_global_trp": {
      "type": "input.timerange",
      "options": {
        "token": "global_time",
        "defaultValue": "-24h@h,now"
      },
      "title": "Global Time Range"
    },
    "input_1PggimcS": {
      "options": {
        "items": [
          {
            "label": "All",
            "value": "*"
          }
        ],
        "defaultValue": "*",
        "token": "field1",
        "clearDefaultOnSelection": true
      },
      "dataSources": {
        "primary": "ds_dCpthBJm"
      },
      "title": "CONTENT_NAME",
      "context": {
        "formattedConfig": {
          "number": {
            "prefix": ""
          }
        }
      },
      "type": "input.multiselect"
    }
  },
  "layout": {
    "type": "absolute",
    "options": {
      "display": "auto-scale",
      "backgroundColor": "#294e70"
    },
    "structure": [
      {
        "item": "viz_7sJt3IPY",
        "type": "block",
        "position": {
          "x": 20,
          "y": 10,
          "w": 200,
          "h": 90
        }
      }
    ],
    "globalInputs": [
      "input_global_trp",
      "input_1PggimcS"
    ]
  },
  "description": "",
  "title": "content_producing_report"
}
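A common fix for this pattern, offered as a sketch rather than a definitive answer: with the token wrapped in literal quotes, multiple selections arrive as a single quoted string (for example "A,B") and match nothing. Dropping the quotes lets the IN clause receive the selected values directly, assuming the multiselect joins selections with commas:

index="billing_sales" source="produced_movie" NAME_ENG IN ($field1$)
| stats sum(AMOUNT) as TOTAL

If NAME_ENG values can contain spaces or commas, each selection needs its own quoting; where the input supports token formatting, a value prefix/suffix of " with a , delimiter produces "A","B" for the IN clause.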
I have a folder with logs; every hour one logfile is written to it. I also have an alert that is triggered when no file has been written in the last hour (checked 15 minutes past the hour). Query:

index=xyz sourcetype=abc
| eval since = now() - _time
| search since < 3600

Mostly it works, but sometimes it is triggered even though I can see in the history that the logfile for that hour is present in Splunk with an accurate _time and nothing is missing. What could be the problem?
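One thing worth checking, sketched below under the assumption that indexing delay is the culprit: if events arrive late, _time can fall within the hour even though the events were not yet searchable when the alert ran. Comparing _indextime to _time shows how far behind ingestion is:

index=xyz sourcetype=abc earliest=-24h
| eval index_lag_seconds = _indextime - _time
| stats max(index_lag_seconds) as max_lag by source

If max_lag regularly exceeds the 15-minute grace window, widening the window or alerting on _indextime instead of _time would avoid the false triggers.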
Hi, I want to create a table from the sample log entry below by computing the field names from the entries defined in its JSON structure. There will be multiple field names, not just one. For example, the JSON structure has entries like "something":"value"; "something" will be the field name, and "value" will be the value that forms the table entries. Working in https://regex101.com I have built a regex that does the job. However, when I put it into the Splunk search query, Splunk does not like the "]" in the regex.

This is the regex: "((?:[^"\\\/\b\f\n\r\t]|\\u\d{4})*)"
Query in Splunk: | rex "((?:[^"\\\/\b\f\n\r\t]|\\u\d{4})*)"
Error in Splunk: Error in 'SearchParser': Mismatched ']'.

This is the sample log:
-------------------
2022/08/31 04:33:10.897 | server| service| INFO | 1-223 |x.x.x.x.x.Payload | xxx-1111-1111111-11-111111111 | AAt: Update Headers: {AAgid=ID:jaaana-11111-1111111111111-3:487:1:1:50, cccc_ff_ssss=ABC_XYZ, ssssdel=false, cdmode=1, DelMode=2, abc_corel_id=xyx-11111-11111-11-111111, aa_rrr_cccc_cccc=AAAA, cust_svc_id=AAAA-DDD, crumberid=xyx-11111-11111-11-111111, svc_tran_origin=SSS, SSScoreed=Camel-SSS-1111-1111111-111, cccc_ff_ssss_aaaaa=AAAA, AAAType=null, cccc_ff_ssss_tata=AAA, AAAexxxx=0, avronnnn=url.add.add.com, AAAssssssss=1661920390882,tang_dik_jagah=ABC_XYZ, ver=0.1.2, AAAprrrrrr=4, AAArptooo=null, source_DOT_adaptr=mom, AAAjaaana=tAAic://toic,tang_dik_jagah_tata=AAA, targCTService=progr, SSScoreedAsBytes=[a@123, CamelAAARequestTimeout=600000, sedaTimeout=600000} {[{"type":"AAtiongo","pAAo":"AAAA","ssssssss":"2022-08-31 00:00:00","data":[{"chabbbi":"ca_1111_11111_AAtiongo_AAAA","tatajahajqaki":"AA 111","jahajqaki":{"numeo":"111","jahaaj":{"cde":"ARL_AA","couAAa":"AA","aaoo":"AAR"},"AAsuf":null},"sgnnnn":"AAR111","stppp":"J","muddStatuscde":"AA","kissak":"III","AAType3lc":"111","AAType5lc":"B111","rggggggg":"AAAAA","carrrrr":{"cde":"ARL_AA","couAAa":"AA","aaoo":"AAR"},"ddddddcde":"pubbb","pubbbjahajqaki":"AA 
111","jahajqakipubbb":{"numeo":"111","jahaaj":{"cde":"AA","couAAa":null,"aaoo":null}},"sssss":1098,"kkkkkss":834,"kitnaba":{"AAAAAA":"2022-08-2100:00:00","WWWW":"2022-08-2100:00:00","eeeeee":"2022-08-2100:00:00","sssssss":"2022-08-2100:00:00","ddddddd":"2022-08-2100:00:00","eeeeeeee":"2022-08-2100:00:00","ddddddddd":"2022-08-2100:00:00","ttttttt":"2022-08-2100:00:00","ttttttt":"2022-08-2100:00:00","Edddddd":"2022-08-2100:00:00","ffffff":"2022-08-2100:00:00","ddddddL":"2022-08-2100:00:00","dddddd":"2022-08-2100:00:00","Adddddd":"2022-08-2100:00:00","ssssT":"2022-08-2100:00:00","ddddd":"2022-08-2100:00:00","ggggg":"2022-08-2100:00:00","ffffff":"2022-08-2100:00:00","Eddddd":"2022-08-2100:00:00","ssssss":"2022-08-2100:00:00","Eddddd":"2022-08-2100:00:00"},"durdddd":{"Exxxxx":"Pdddd.oo","ScfffTTTT":"xxx1H0M0.000S","xxxxIDL":"-Pxxxx6M0.000S","ESTTTT":"PxxxxH26M0.000S"},"gallle":[{"aaaaaaa":"aaa000033","gffffnnnn":"111"}],"stsssss":[{"hhhhhh":"AA1111111","standnnnn":"S20"}],"blttttt":[{"hhhhhh":"ABB000003","beltnnnn":"aa11","beltAAenpttttt":"2022-08-2100:00:00","kkkkkkkpttttt":"2022-08-2100:00:00"}],"redddddd":{"SSSSS":[{"aalllll":"ALLUU99999","resssssss":"AA1111111","resssssssnnnn":"S20","pprrrrrsssss":"AAA11111"}],"bgggg_blt":[{"aalllll":"aaaaaa1111111","resssssss":"ABB000003","resssssssnnnn":"IB02","kitnaba":{"AAAAAA":"2022-08-31006:14:00a","AAAAAA":"2022-08-31006:14:00a"}}],"aaaaaaaaaaa_sss":[{"aalllll":"aaaaaa8888888","resssssss":"false"}],"aaaaaaaaaa_ssss":[{"aalllll":"aaaaaa8888888","resssssss":"GAT000033","resssssssnnnn":"120","pprrrrrsssss":"GAT000019"}],"qqqqqqqqqqqq":[{"aalllll":"qqqqqqqqqqqq","resssssss":"false"}]},"kkkkkk":[{"cde":"aaa_sss","tatAAde":"CAI","aaaaAAde":"PPPP","legnumeo":1},{"cde":"ABC_XYZ","tatAAde":"AAA","aaaaAAde":"AAAA","legnumeo":2}],"cdeshareList":[{"numeo":"1111","jahaaj":{"cde":"ARL_AA","couAAa":"AA","aaoo":"AAA"},"AAsuf":null,"pubbbjahajqaki":"AA 1111","jahajqakipubbb":{"numeo":"1111","jahaaj":{"cde":"AA","couAAa":null,"aaoo":null}}},{"numeo":"1111","jahaaj":{"cde":"ARL_CT","couAAa":"CT","aaoo":"CTH"},"AAsuf":null,"pubbbjahajqaki":"CT 1111","jahajqakipubbb":{"numeo":"1111","jahaaj":{"cde":"CT","couAAa":null,"aaoo":null}}}],"saaaaaa":{"ffff":"RRR","mapr":"Finalised","SSSGeneral":"AAened","AAceptance":"Finalised","loddacctrr":"SheCT_Finalised","brrrrrrdd":"AAened","IIIernal":"110"}}]}]} host = mucAAuplfrAA02 -----------------------  
So I have these datasets uploaded to my Splunk Enterprise instance on Windows, but under the little Edit menu there is no "Delete" option. Is there a way to delete them manually or forcefully?
I have these events:

(pool-4-thread-1 18a68b34-f4af-4940-9339-6201b5004bb8) (********): do_SMSGW (Request) : &from=TULBUR&to=********&text=*******:+Tanii+********+gereenii+tulburiin+uldegdel+59706.42T+tulbur+tulugduugui+tul+buh+heregleeg+2022-08-28-nd+haahiig+anhaarna+uu.
(pool-4-thread-2 3adfc9d2-86e3-4e6e-8767-08f94370075a) (********): do_SMSGW (Request) : &from=TULBUR&to=********&text=*******:+Tanii+********+gereenii+tulburiin+uldegdel+9900T+tulbur+tulugduugui+tul+buh+heregleeg+2022-08-28-nd+haahiig+anhaarna+uu.

I need to get the value between +uldegdel+ and +tulbur+ (the "needed value"). Please help, I'm new to Splunk.
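A minimal sketch, with the field name uldegdel_value chosen arbitrarily, that captures everything between +uldegdel+ and +tulbur+:

| rex "\+uldegdel\+(?<uldegdel_value>[^+]+)\+tulbur\+"

Against the events above this would yield 59706.42T and 9900T respectively.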
Hi, I have a search that uses the chart command to split by two fields, such that the results are shown below, split by Name and Month. I would like to add a row with the average of all Names for each Month, and a column with the average of all Months for each Name. I have tried using appendpipe and appendcols for each case, but couldn't quite figure out the syntax with a chart command. PS: each row is already an appended subsearch.
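A sketch of one way to do both, assuming a base search of the form | chart avg(value) over Name by Month (the aggregation and field names here are placeholders): appendpipe adds the average row across all Names, and addtotals plus a division approximates the average column across Months:

| chart avg(value) over Name by Month
| appendpipe [ stats avg(*) as * | eval Name="Average" ]
| addtotals fieldname=MonthAvg
| eval MonthAvg = round(MonthAvg / 12, 2)

The divisor 12 assumes twelve Month columns; adjust it to the real column count, and note the appendpipe must run before addtotals so the row sums do not feed back into the column averages.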
I am using the Splunk Observability REST API, specifically the "/apm/trace" endpoint. I have the following questions about the throttling limits that trigger HTTP code 429:
- What is the limit?
- Is it configured anywhere within Splunk Observability?
- What type of throttling is it, such as hard, soft, or elastic/dynamic?
- Does it use a fixed window or a rolling window, with or without counters?
Thanks!
Hi, regarding spans, may I ask if the Span ID should always be unique, meaning no two different spans share the same Span ID? Thanks
Hello, Any suggestions on onboarding Cradlepoint Router logs to Splunk? Please advise.   Thanks in advance.
Hi All, I am trying to find the difference between two searches.

Search 1: index="xxx_prd" "/XX900/LT_TEST" (returns about 20 records)
Search 2: index="xxx_prd" "http://xxx.yyy.com/XX900/LT_TEST" (returns 15 records)

I want to get the 5 results that differ between search 1 and search 2. Please advise. Thanks, Yatan
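Since search 2 looks strictly narrower (every event containing the full URL also contains the path), one sketch is to keep search 1's term and exclude search 2's:

index="xxx_prd" "/XX900/LT_TEST" NOT "http://xxx.yyy.com/XX900/LT_TEST"

For the general case where neither search subsumes the other, the set diff command can compare two result sets, at the cost of subsearch limits.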
We are receiving errors in the _internal index for JSON logs:
1. ERROR JsonLineBreaker - JSON StreamId:1254678906 had parsing error:Unexpected character: "s"
2. ERROR JsonLineBreaker - JSON StreamId:1254678906 had parsing error:Unexpected character: "a"

Sample logs:
{
    level: debug
    message: Creating a new instance of inquiry call
    timestamp: 2022-08-25T20:30:45.678Z
}

My props.conf:
TIME_PREFIX=timestamp" : "
TIME_FORMAT= %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD=40
TZ=UTC

How do I resolve this issue?
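JsonLineBreaker parsing errors generally mean the JSON line breaker (INDEXED_EXTRACTIONS = json) hit text that is not pure JSON, for example a non-JSON header line, or a raw event that differs from the unquoted rendering pasted above. If the raw events really are one JSON object each, a props.conf along these lines is a starting point; the sourcetype name is a placeholder, and the TIME_PREFIX regex assumes a quoted "timestamp" key in the raw JSON:

[your_json_sourcetype]
INDEXED_EXTRACTIONS = json
TIME_PREFIX = "timestamp"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 40
TZ = UTC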
Hello everyone, I have been trying to understand how this alert works, because from my point of view it doesn't make sense. The message NEVER disappears from our Splunk instances, and I have been trying to find the real root cause, but I am not clear on how this works. I have this message:

The percentage of small buckets (75%) created over the last hour is high and exceeded the red thresholds (50%) for index=foo, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=11, small buckets=8

So I checked whether the logs have time-parsing issues, and there are no issues with the logs indexed by the foo index. Then I checked with this search:

index=_internal sourcetype=splunkd component=HotBucketRoller "finished moving hot to warm"
| eval bucketSizeMB = round(size / 1024 / 1024, 2)
| table _time splunk_server idx bid bucketSizeMB
| rename idx as index
| join type=left index
    [ | rest /services/data/indexes count=0
      | rename title as index
      | eval maxDataSize = case (maxDataSize == "auto", 750, maxDataSize == "auto_high_volume", 10000, true(), maxDataSize)
      | table index updated currentDBSizeMB homePath.maxDataSizeMB maxDataSize maxHotBuckets maxWarmDBCount ]
| eval bucketSizePercent = round(100*(bucketSizeMB/maxDataSize))
| eval isSmallBucket = if (bucketSizePercent < 10, 1, 0)
| stats sum(isSmallBucket) as num_small_buckets count as num_total_buckets by index splunk_server
| eval percentSmallBuckets = round(100*(num_small_buckets/num_total_buckets))
| sort - percentSmallBuckets
| eval isViolation = if (percentSmallBuckets > 30, "Yes", "No")
| search isViolation = Yes
| stats count

I ran that search for the last 2 days and the result is ZERO, but the red flag is still there, so I do not understand what is going on. Here is the log indicating that foo is rolling from hot to warm:

08-30-2022 02:12:27.121 -0400 INFO HotBucketRoller [1405281 indexerPipe] - finished moving hot to warm bid=foo~19~AAD3329E-C8D9-4607-90FB-167760B4EB6F idx=foo from=hot_v1_19 to=db_1661054400_1628568000_19_AAD3329E-C8D9-4607-90FB-167760B4EB6F size=797286400 caller=size_exceeded _maxHotBucketSize=786432000 (750MB), bucketSize=797315072 (760MB)

So the reason looks legitimate: caller=size_exceeded, due to the size. For information, this index receives data just once a day, at midnight. If you have any input I would really appreciate it.

Version 8.2.2
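As a side check (a sketch only), grouping recent bucket rolls by the caller field shows whether small buckets are being forced by something other than size, for example idle timeouts or timestamp spread:

index=_internal sourcetype=splunkd component=HotBucketRoller "finished moving hot to warm"
| rex "caller=(?<roll_reason>\S+)"
| stats count by idx roll_reason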
Hello, I am getting the error below. I am trying to add a pipe "|" around all the results.

Error: Failed to parse templatized search for field 'ResponseTime(ms)'

My search:
| table PeriodDate VendorName ContractName OccMetricCode Pagekey TransactionType TransactionDatetime ResponseTime(ms) Comment
| foreach * [ eval <<FIELD>>="|".<<FIELD>>."|" ]

I am getting pipe-separated results for every field except ResponseTime(ms):

PeriodDate    ResponseTime(ms)    Comment
|2022/08/30|  0                   ||

Thanks in advance
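The parentheses in ResponseTime(ms) are most likely what breaks the eval template, since <<FIELD>> is substituted verbatim into the eval expression. A sketch of the usual workaround: wrap <<FIELD>> in single quotes so eval treats the substituted name as a field reference even when it contains special characters:

| foreach * [ eval '<<FIELD>>' = "|" . '<<FIELD>>' . "|" ]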
I have a standalone instance with existing data on it. I have created a new indexer cluster that does not include this standalone machine. All instances are running the same OS and Splunk version. Can I add the existing data to the cluster by adding the standalone instance to the cluster as a peer? What will the behavior be in such a case?  I'm aware of the bucket copying method, but I'm hoping there's a more hands-off method to accomplish this goal. 
Hello, I am looking at https://docs.splunk.com/Documentation/Splunk/9.0.0/Capacity/Parallelization and was wondering which systems to make the changes on. For instance, batch parallelization: should the limits be changed on the search heads, the indexers, or both? Same question for data models, report acceleration, and index parallelization. Oh, for what it's worth, I am running Splunk Enterprise 9 on a C1/C11 deployment. -jason
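For the index-parallelization piece specifically, the relevant setting is parallelIngestionPipelines in server.conf, applied on each indexer; a sketch, where the value 2 is only an example to be sized against spare CPU capacity:

# server.conf on each indexer
[general]
parallelIngestionPipelines = 2

A commonly cited rule of thumb is to leave this at 1 unless the indexer has idle cores and I/O headroom.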