All Topics



Hello, I have a question: is there any way to embed my dashboard on my website so that it updates by itself? Thanks a lot!
My team uses playbooks to automate email alerts in Phantom. Some playbooks have been randomly sending emails with the replacement character (a black diamond with a white question mark). Other times the emails are working fine and have normal text. Has anyone had this issue in the past? If so, how did you resolve it?  I was thinking of updating the Splunk SMTP App in Phantom. Thanks for the help!
Hello, I have an app on our Cloud SH named A and I want to rename it to B. Which config change is required to rename an app on Splunk Cloud? I guess I need to open a case with Splunk support since we don't have backend access, but I am curious whether this rename goes in app.conf or some other conf file. Please advise. Thanks,
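For what it's worth, on Splunk Enterprise the name shown in Splunk Web normally comes from app.conf, so a label-only rename would be a small change like the sketch below (changing the app's directory/package id itself on Splunk Cloud would indeed be a support case):

    # app.conf
    [ui]
    label = B

    [package]
    id = A    # must match the app directory name; leave unchanged unless support renames the app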
The page doesn't have a download link that I can find, and there's nothing in the documentation. Has it been removed?   https://splunkbase.splunk.com/app/6250/#/details
One of our alerts, CSIRT - Threat_Activity_Detection, came in on 8/31 but did not auto-assign the Incident Type I created (csirt - threat_activity_detection), and therefore the Response Template I created (CSIRT – Threat Activity Detection) for that incident did not get assigned. Is this a bug, or did I not configure this properly?
Hello, one of my company's firewalls ingests more logs into Splunk every Tuesday, which puts us over the 10 GB/day limit of our subscription. This only happens on Tuesdays. Does anyone know what the problem is, and how to bring the daily ingestion back to uniformity? Thanks for the help, E
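A hedged starting point for the investigation: the license usage log in _internal can break the Tuesday spikes down by sourcetype and host (b is the number of bytes charged against the license):

    index=_internal source=*license_usage.log type=Usage
    | bin _time span=1d
    | stats sum(eval(b/1024/1024/1024)) as GB by _time, st, h

Whichever sourcetype/host jumps on Tuesdays is the culprit; firewalls often have weekly scheduled jobs (reports, config backups) that show up this way.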
This is the code:

import requests
import datetime

now = datetime.datetime.now()
# print(now)
data = {'ticket_id':'CH-12345','response_code':200,'service':'Ec2','problem_type':'server_down','time':now}
headers = { 'Content-Type': 'application/json' }
response = requests.post('https://localhost:8089/servicesNS/nobody/TA-cherwell-data-pull/storage/collections/data/cherwell_data', headers=headers, data=data, verify=False, auth=('admin', 'changeme'))
print(response.text)

This is the error I am getting:

<msg type="ERROR">JSON in the request is invalid. ( JSON parse error at offset 1 of file "ticket_id=CH-12345&response_code=200&service=Ec2&problem_type=server_down&time=2022-08-31+20%3A28%3A53.237962": Unexpected character while parsing literal token: 'i' )</msg>

Please let me know if you need any more information.
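In case it helps: passing a dict via data= makes requests form-encode the body (which is exactly the ticket_id=...&... string in the error message), while the KV store endpoint expects a JSON body. A minimal sketch of the fix, keeping the endpoint and credentials as-is:

    import datetime
    import json

    import requests

    now = datetime.datetime.now()
    data = {
        'ticket_id': 'CH-12345',
        'response_code': 200,
        'service': 'Ec2',
        'problem_type': 'server_down',
        'time': now.isoformat(),  # datetime objects are not JSON-serializable, so convert first
    }
    headers = {'Content-Type': 'application/json'}
    response = requests.post(
        'https://localhost:8089/servicesNS/nobody/TA-cherwell-data-pull/storage/collections/data/cherwell_data',
        headers=headers,
        data=json.dumps(data),  # or json=data, which serializes and sets the Content-Type header for you
        verify=False,
        auth=('admin', 'changeme'),
    )
    print(response.text)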
Hello, I have a little problem with Splunk! I have a table that basically contains data in the following way:

number  value
1       A
1       B
2       C
3       D
3       E

I would like to have a table like:

number  value
1       A B
2       C
3       D E

As you can see, I would like to have the grouped values in the same cell. If you have a solution, please share it!
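A hedged sketch using stats: values(value) groups the entries into one multivalue cell per number (deduplicated and sorted), while list(value) keeps the original order and duplicates:

    ... your base search ...
    | stats values(value) as value by number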
Hello, what's the best way to compare averages between two non-adjacent time periods? I have a bunch of API call events with a response_time field. I need a dashboard where I can see the performance difference between last month and the current month. If I try the following, somehow the averages are blank in the dashboard, but when I click the magnifying glass on the tile, I get a search query with values. What am I missing? Is there an even more efficient and faster way?

<form>
  <label>API Performance</label>
  <search id="multisearch">
    <query>| multisearch
      [ search earliest=$periodBeforeTok.earliest$ latest=$periodBeforeTok.latest$ index=A my_search_query response_time=* | eval response_time_before=response_time | fields api_request response_time_before | eval timeSlot="1" ]
      [ search earliest=$periodAfterTok.earliest$ latest=$periodAfterTok.latest$ index=A my_search_query | eval response_time_after=response_time | fields api_request response_time_after | eval timeSlot="2" ]
    </query>
  </search>
  <fieldset submitButton="true" autoRun="false">
    <input type="time" token="periodBeforeTok">
      <label>Before Time Period</label>
      <default>
        <earliest>1658707200</earliest>
        <latest>1659312000</latest>
      </default>
    </input>
    <input type="time" token="periodAfterTok">
      <label>After Time Period</label>
      <default>
        <earliest>1659312000</earliest>
        <latest>1659916800</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <title>Query Stats</title>
        <search base="multisearch">
          <query>| stats count as totalCount, count(eval(timeSlot=1)) as totalCountBefore, count(eval(timeSlot=2)) as totalCountAfter, avg(response_time_before) as response_time_before, avg(response_time_after) as response_time_after by api_request
| eval response_time_before=round(response_time_before/1000,3)
| eval response_time_after=round(response_time_after/1000,3)
| eval delta_response_time=response_time_after-response_time_before
| table api_request totalCountBefore totalCountAfter response_time_before response_time_after delta_response_time</query>
        </search>
        <option name="drilldown">cell</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
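While debugging the base search, a hedged single-search alternative (assuming index A and fixed month boundaries) can bucket both periods in one pass, so no multisearch or post-process plumbing is needed:

    index=A my_search_query response_time=* earliest=-2mon@mon latest=@mon
    | eval timeSlot = if(_time < relative_time(now(), "-1mon@mon"), "before", "after")
    | stats count(eval(timeSlot="before")) as totalCountBefore,
            count(eval(timeSlot="after")) as totalCountAfter,
            avg(eval(if(timeSlot="before", response_time, null()))) as response_time_before,
            avg(eval(if(timeSlot="after", response_time, null()))) as response_time_after
            by api_request
    | eval delta_response_time = round(response_time_after/1000,3) - round(response_time_before/1000,3)

One search over both windows is usually faster than two legs of a multisearch, and it sidesteps the question of which tokens the base search sees at load time (note autoRun is false, so nothing runs until Submit).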
I have a Universal Forwarder accepting syslog traffic from multiple sources. The UF forwards up to indexers in Splunk Cloud. My question is two-fold: if I need an add-on, e.g. for VMware ESXi logs, do I install it on the UF or request installation in Splunk Cloud? And if the latter, how does my UF know that it can now use any new sourcetypes? I've read through the installation notes on a few add-ons and have not seen any mention of how new sourcetypes are used outside of the server or instance where the add-on is directly installed. Thanks!
Hello, we had a standalone search head and indexer in a pre-production environment; then I created a new clustered environment with 2 SHs and 2 IDXs. I want to add the old non-clustered search head and indexer as well. Could you let me know the right commands/procedures to add them to the existing cluster? Do I need to remove those Splunk instances and reinstall from scratch? I understand the old non-clustered data may be removed, but this is not a problem, as it is mostly frozen. Thanks for your help.
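A hedged sketch of the usual procedure: you normally don't reinstall, you point the existing instances at the cluster manager (this assumes Splunk 9.x CLI syntax, a manager reachable at manager01:8089, and your cluster key; older releases spell these -mode slave and -master_uri):

    # On the old indexer: join the cluster as a peer (its legacy buckets stay unreplicated)
    splunk edit cluster-config -mode peer -manager_uri https://manager01:8089 -replication_port 9887 -secret <cluster_key>
    splunk restart

    # On the old search head: attach it to the indexer cluster as a search head
    splunk edit cluster-config -mode searchhead -manager_uri https://manager01:8089 -secret <cluster_key>
    splunk restart

If your two search heads form a search head cluster rather than two standalone SHs, adding a member is a different flow (splunk init shcluster-config on the joining instance, then splunk add shcluster-member), so check which topology you actually have.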
Is there a more elegant way to do this? New to using rex & I can’t seem to strip out the multiple parentheses and slashes from a field without using replace.  (I don't have control over the data, I know it is better to strip it out first.) These do work but in some cases there are more parentheses and slashes - is there a way to strip all of them out at once, or do I need to make repeating phrases? | rex mode=sed field=Field_A "s/\(\)/ /g" | rex mode=sed field=Field_B "s/\(\)/ /g" | rex mode=sed field=Field_B "s/\// /g"
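A hedged consolidation: a sed character class matches any one of the listed characters, so one expression per field handles all the parentheses and slashes at once, however many there are (use " " instead of "" as the replacement if you want spaces, as in your first expression):

    | rex mode=sed field=Field_A "s/[()\/]//g"
    | rex mode=sed field=Field_B "s/[()\/]//g"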
After we upgraded to v9.0.1 we get a warning when following dashboard-generated links pointing "outside" splunk: Redirecting away from Splunk You are being redirected away from Splunk to the following URL: https://[some non-splunk web-server] Note that tokens embedded in a URL could contain sensitive information. It comes with a "Don't show again" option, but it indeed shows again every time. Is there somewhere to disable this warning? Thanks
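A possible avenue, hedged since I'm going from memory of the 9.x web.conf spec rather than the docs: Splunk 9 added a trusted-domains allowlist for dashboard links, and domains on that list are supposed to skip the interstitial. The setting name below is an assumption to verify against $SPLUNK_HOME/etc/system/README/web.conf.spec before relying on it:

    # web.conf - assumed setting name, please verify against your web.conf.spec
    [settings]
    dashboards_trusted_domain.my_server = some-non-splunk-web-server.example.com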
Hello Everyone, I'm trying to write a custom Python modular input to fetch some HTML tables (all the Windows 10 release history tables) from the Microsoft Windows 10 Release Information page. My idea is to create a modular input that runs once a month and uses the pandas.read_html function to ingest all the release history tables and index all the rows into Splunk. I've figured out how to do the Python code, but I have some issues with importing the pandas library into my custom app. I've read some Splunk Community posts and I've placed exec_anaconda.py (from $SPLUNK_HOME\etc\apps\Splunk_SA_Scientific_Python_windows_x86_64\bin) inside %SPLUNK_HOME%\etc\apps\my_custom_app\bin and also added the util folder (from %SPLUNK_HOME%\etc\apps\Splunk_ML_Toolkit\bin\utils) to avoid the "ModuleNotFoundError: No module named 'util'" Python exception. Also, as stated in the PSC README, I've placed the following lines right at the beginning of the collect_events(...) function:

    def collect_events(helper, ew):
        import exec_anaconda
        exec_anaconda.exec_anaconda()
        import pandas
        ...

I keep getting the error: "ERROR Error encountered while loading Python for Scientific Computing, see search.log." But obviously the search.log file is empty, since this is not an SPL search. Is it possible to use the PSC libraries inside my modular input to accomplish this? Thank you.
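Separately from the PSC loading question, here is a minimal sketch of the fetch/flatten step with plain pandas (assuming pandas plus a parser like lxml are importable by Splunk's Python, e.g. vendored into the app's bin/ or lib/ directory; the URL is illustrative, not taken from the post):

    import json

    import pandas as pd

    URL = "https://learn.microsoft.com/en-us/windows/release-health/release-information"  # illustrative

    def fetch_release_tables(url=URL):
        # read_html returns one DataFrame per <table> element on the page (requires lxml or html5lib)
        for table in pd.read_html(url):
            for _, row in table.iterrows():
                # one JSON event per table row; in a modular input this would feed ew.write_event(...)
                print(json.dumps(row.to_dict(), default=str))

    if __name__ == "__main__":
        fetch_release_tables()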
Hi Team, we found a lot of HTTP Error 400 faults in error transactions, even though they are normal behavior of the application. But it was bothering the customer, as it was showing a high % of error BTs. Is it possible to configure it to ignore (exclude) the URL only when it returns HTTP Error 400, but keep monitoring it in its normal state or for other abnormal error states? We want to add the URL: /axway/bill-payment/fetch-biller-info. I have read through the Error Exception menu; unfortunately, there was no way to exclude/ignore a specific URL, only to exclude/ignore an HTTP error as a whole. Can anyone offer advice on how this might be done?
Hi everyone,    When I search for multiple items from multiselect, it is not working. I can search for "ALL" or one item only but not multiple items.  Here is the search:  index="billing_sales" source="produced_movie" NAME_ENG IN ("$field1$") | stats sum(AMOUNT) as TOTAL   How do I change the above search so that I can look up multiple field1s?     {     "visualizations": {         "viz_7sJt3IPY": {             "type": "splunk.singlevalue",             "options": {                 "backgroundColor": "transparent",                 "majorColor": "#f8be44"             },             "dataSources": {                 "primary": "ds_i9R3dB04"             }         }     },     "dataSources": {         "ds_DCcDyt7v": {             "type": "ds.search",             "options": {                 "query": "index=\"billing_sales\" source=\"produced_movie_ddish\" \n| table CARD_NUMBER, NAME_ENG, DESCR, AMOUNT, PRODUCT_ID, TRANS_DATE, CONTENT_ID, PRODUCT_ID"             },             "name": "Search_1"         },         "ds_dCpthBJm": {             "type": "ds.chain",             "options": {                 "extend": "ds_DCcDyt7v",                 "query": "| stats count by NAME_ENG"             },             "name": "content_name"         },         "ds_i9R3dB04": {             "type": "ds.search",             "options": {                 "query": "index=\"billing_sales\" source=\"produced_movie_ddish\" NAME_ENG IN (\"$field1$\") \n| stats sum(AMOUNT) as DDISH_TOTAL"             },             "name": "Search_2"         }     },     "defaults": {         "dataSources": {             "ds.search": {                 "options": {                     "queryParameters": {                         "latest": "$global_time.latest$",                         "earliest": "$global_time.earliest$"                     }                 }             }         }     },     "inputs": {         "input_global_trp": {             "type": "input.timerange",             "options": {                 "token": "global_time",                 "defaultValue": "-24h@h,now"             },             "title": "Global Time Range"         },         "input_1PggimcS": {             "options": {                 "items": [                     {                         "label": "All",                         "value": "*"                     }                 ],                 "defaultValue": "*",                 "token": "field1",                 "clearDefaultOnSelection": true             },             "dataSources": {                 "primary": "ds_dCpthBJm"             },             "title": "CONTENT_NAME",             "context": {                 "formattedConfig": {                     "number": {                         "prefix": ""                     }                 }             },             "type": "input.multiselect"         }     },     "layout": {         "type": "absolute",         "options": {             "display": "auto-scale",             "backgroundColor": "#294e70"         },         "structure": [             {                 "item": "viz_7sJt3IPY",                 "type": "block",                 "position": {                     "x": 20,                     "y": 10,                     "w": 200,                     "h": 90                 }             }         ],         "globalInputs": [             "input_global_trp",             "input_1PggimcS"         ]     },     "description": "",     "title": "content_producing_report" }
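For what it's worth, a hedged guess at the culprit: with multiple selections, a multiselect token usually expands to a delimited list such as A,B, so wrapping it in one pair of literal quotes produces the single string "A,B", which matches nothing. One sketch of a workaround, assuming the input can take its value column from the chained search, is to bake the quotes into the option values and drop them from the query:

    | stats count by NAME_ENG
    | eval value = "\"" . NAME_ENG . "\""

and then in the panel search:

    index="billing_sales" source="produced_movie" NAME_ENG IN ($field1$)
    | stats sum(AMOUNT) as TOTAL

The exact option-mapping behavior of input.multiselect in Dashboard Studio is the assumption here, so verify against the dashboard framework docs.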
I have a folder with logs; every hour one logfile is written to it. I also have an alert that is triggered when no file has been written in the last hour (it runs 15 minutes past the hour). Query: index=xyz sourcetype=abc | eval since = now() - _time | search since < 3600. Mostly it works, but sometimes it's triggered even though I can see in the history that the logfile for that hour is present in Splunk with an accurate _time and nothing missing. What could be the problem?
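A hedged guess at the cause: indexing lag. The alert compares now() with _time, but if the file reaches the indexer more than 15 minutes after the events' timestamps, the events are not searchable yet when the alert runs, even though they show up later with an accurate _time. You can measure the lag with _indextime:

    index=xyz sourcetype=abc earliest=-24h
    | eval lag_seconds = _indextime - _time
    | stats max(lag_seconds) as max_lag, avg(lag_seconds) as avg_lag by source

If max_lag ever exceeds your 15-minute grace period, that would explain the occasional false alert.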
Hi, I want to create a table from the sample log file entry by computing the field names based on the entries defined in the JSON structure. There will be multiple field names, not just one. E.g., the JSON structure has entries like "something":"value", where "something" will be the field name and "value" will be the value that forms the table entries. By working in https://regex101.com I have got a regex that does the job. However, when I try to put it in the Splunk search query, Splunk does not like the "]" in the regex.

This is the regex: "((?:[^"\\\/\b\f\n\r\t]|\\u\d{4})*)"

Query in Splunk: | rex "((?:[^"\\\/\b\f\n\r\t]|\\u\d{4})*)"

Error in Splunk: Error in 'SearchParser': Mismatched ']'.

This is the sample log:
-------------------
2022/08/31 04:33:10.897 | server| service| INFO | 1-223 |x.x.x.x.x.Payload | xxx-1111-1111111-11-111111111 | AAt: Update Headers: {AAgid=ID:jaaana-11111-1111111111111-3:487:1:1:50, cccc_ff_ssss=ABC_XYZ, ssssdel=false, cdmode=1, DelMode=2, abc_corel_id=xyx-11111-11111-11-111111, aa_rrr_cccc_cccc=AAAA, cust_svc_id=AAAA-DDD, crumberid=xyx-11111-11111-11-111111, svc_tran_origin=SSS, SSScoreed=Camel-SSS-1111-1111111-111, cccc_ff_ssss_aaaaa=AAAA, AAAType=null, cccc_ff_ssss_tata=AAA, AAAexxxx=0, avronnnn=url.add.add.com, AAAssssssss=1661920390882,tang_dik_jagah=ABC_XYZ, ver=0.1.2, AAAprrrrrr=4, AAArptooo=null, source_DOT_adaptr=mom, AAAjaaana=tAAic://toic,tang_dik_jagah_tata=AAA, targCTService=progr, SSScoreedAsBytes=[a@123, CamelAAARequestTimeout=600000, sedaTimeout=600000} {[{"type":"AAtiongo","pAAo":"AAAA","ssssssss":"2022-08-31 00:00:00","data":[{"chabbbi":"ca_1111_11111_AAtiongo_AAAA","tatajahajqaki":"AA 111","jahajqaki":{"numeo":"111","jahaaj":{"cde":"ARL_AA","couAAa":"AA","aaoo":"AAR"},"AAsuf":null},"sgnnnn":"AAR111","stppp":"J","muddStatuscde":"AA","kissak":"III","AAType3lc":"111","AAType5lc":"B111","rggggggg":"AAAAA","carrrrr":{"cde":"ARL_AA","couAAa":"AA","aaoo":"AAR"},"ddddddcde":"pubbb","pubbbjahajqaki":"AA 
111","jahajqakipubbb":{"numeo":"111","jahaaj":{"cde":"AA","couAAa":null,"aaoo":null}},"sssss":1098,"kkkkkss":834,"kitnaba":{"AAAAAA":"2022-08-2100:00:00","WWWW":"2022-08-2100:00:00","eeeeee":"2022-08-2100:00:00","sssssss":"2022-08-2100:00:00","ddddddd":"2022-08-2100:00:00","eeeeeeee":"2022-08-2100:00:00","ddddddddd":"2022-08-2100:00:00","ttttttt":"2022-08-2100:00:00","ttttttt":"2022-08-2100:00:00","Edddddd":"2022-08-2100:00:00","ffffff":"2022-08-2100:00:00","ddddddL":"2022-08-2100:00:00","dddddd":"2022-08-2100:00:00","Adddddd":"2022-08-2100:00:00","ssssT":"2022-08-2100:00:00","ddddd":"2022-08-2100:00:00","ggggg":"2022-08-2100:00:00","ffffff":"2022-08-2100:00:00","Eddddd":"2022-08-2100:00:00","ssssss":"2022-08-2100:00:00","Eddddd":"2022-08-2100:00:00"},"durdddd":{"Exxxxx":"Pdddd.oo","ScfffTTTT":"xxx1H0M0.000S","xxxxIDL":"-Pxxxx6M0.000S","ESTTTT":"PxxxxH26M0.000S"},"gallle":[{"aaaaaaa":"aaa000033","gffffnnnn":"111"}],"stsssss":[{"hhhhhh":"AA1111111","standnnnn":"S20"}],"blttttt":[{"hhhhhh":"ABB000003","beltnnnn":"aa11","beltAAenpttttt":"2022-08-2100:00:00","kkkkkkkpttttt":"2022-08-2100:00:00"}],"redddddd":{"SSSSS":[{"aalllll":"ALLUU99999","resssssss":"AA1111111","resssssssnnnn":"S20","pprrrrrsssss":"AAA11111"}],"bgggg_blt":[{"aalllll":"aaaaaa1111111","resssssss":"ABB000003","resssssssnnnn":"IB02","kitnaba":{"AAAAAA":"2022-08-31006:14:00a","AAAAAA":"2022-08-31006:14:00a"}}],"aaaaaaaaaaa_sss":[{"aalllll":"aaaaaa8888888","resssssss":"false"}],"aaaaaaaaaa_ssss":[{"aalllll":"aaaaaa8888888","resssssss":"GAT000033","resssssssnnnn":"120","pprrrrrsssss":"GAT000019"}],"qqqqqqqqqqqq":[{"aalllll":"qqqqqqqqqqqq","resssssss":"false"}]},"kkkkkk":[{"cde":"aaa_sss","tatAAde":"CAI","aaaaAAde":"PPPP","legnumeo":1},{"cde":"ABC_XYZ","tatAAde":"AAA","aaaaAAde":"AAAA","legnumeo":2}],"cdeshareList":[{"numeo":"1111","jahaaj":{"cde":"ARL_AA","couAAa":"AA","aaoo":"AAA"},"AAsuf":null,"pubbbjahajqaki":"AA 1111","jahajqakipubbb":{"numeo":"1111","jahaaj":{"cde":"AA","couAAa":null,"aaoo":null}}},{"numeo":"1111","jahaaj":{"cde":"ARL_CT","couAAa":"CT","aaoo":"CTH"},"AAsuf":null,"pubbbjahajqaki":"CT 1111","jahajqakipubbb":{"numeo":"1111","jahaaj":{"cde":"CT","couAAa":null,"aaoo":null}}}],"saaaaaa":{"ffff":"RRR","mapr":"Finalised","SSSGeneral":"AAened","AAceptance":"Finalised","loddacctrr":"SheCT_Finalised","brrrrrrdd":"AAened","IIIernal":"110"}}]}]} host = mucAAuplfrAA02 -----------------------  
So I have these datasets uploaded to my Splunk Enterprise instance on Windows, but under the little Edit menu there is no "Delete" option. Is there a way to delete them manually or forcefully?
I have this event:

(pool-4-thread-1 18a68b34-f4af-4940-9339-6201b5004bb8) (********): do_SMSGW (Request) : &from=TULBUR&to=********&text=*******:+Tanii+********+gereenii+tulburiin+uldegdel+59706.42T+tulbur+tulugduugui+tul+buh+heregleeg+2022-08-28-nd+haahiig+anhaarna+uu. (pool-4-thread-2 3adfc9d2-86e3-4e6e-8767-08f94370075a) (********): do_SMSGW (Request) : &from=TULBUR&to=********&text=*******:+Tanii+********+gereenii+tulburiin+uldegdel+9900T+tulbur+tulugduugui+tul+buh+heregleeg+2022-08-28-nd+haahiig+anhaarna+uu.

And I need to get the value between +uldegdel+ and +tulbur+ (the "needed value"). Please help, I'm new to Splunk.
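A hedged sketch: anchor on the literal +uldegdel+ and +tulbur+ markers and capture everything between them (the field name uldegdel_value is just an illustrative choice):

    | rex "\+uldegdel\+(?<uldegdel_value>[^+]+)\+tulbur\+"

Against the two sample events this yields 59706.42T and 9900T.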