All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I would like to hide/not display a panel when there is no data. Is this possible in Splunk Dashboard Studio? If yes, how can we achieve it? Can anyone please guide me?
Hi all, my understanding is that Splunk forwarders store data in cache memory when transferring data to the Splunk indexer. Is there a way to limit the amount of data stored in the Splunk forwarder's cache memory?
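For reference, a minimal outputs.conf sketch of the setting usually pointed at for this, assuming the "cache" in question is the forwarder's in-memory output queue; the queue size shown is only an illustrative value:

# outputs.conf on the forwarder (values are illustrative assumptions)
[tcpout]
# caps the in-memory queue the forwarder fills while sending to indexers
maxQueueSize = 10MB
# optional: indexer acknowledgement adds a wait queue sized relative to maxQueueSize
useACK = true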
Does anyone know why a lookahead such as the following causes a dashboard panel to hang with "waiting for data", but works perfectly when run in an independent search?   rex field=foo "(?=\w+$)(?P<bar>\w+$)"   Stranger still - if the rex command is ``` commented out ```, the issue continues to occur. For context, the panel is a tabular drilldown panel that uses a boolean token to display on/off, and two tokens for earliest and latest values, based on the selected "row" of a column chart using $row._time and relative_time($row._time$, "+1h"). The panel displays without issue when the rex is removed. Other rex commands work without issue. The solution in this case was to remove the lookahead entirely. However, given the status of "waiting for data", does anyone know the cause (and thus ways to avoid this issue in general)?
I have a use case where I need to run analytics on top of data that lands in Splunk, so I also want to store all of that data in S3 as it arrives. I would like to know the best way, with the latest version of Splunk Enterprise/Splunk Cloud Platform, to save a copy of the data to S3 as it comes into Splunk. Please share any suggestions. Thank you.
Hi, when I create an extraction field with regex, the field match is shown as correct. I can check the regex on https://regex101.com/. The field is shown in raw events if I try to define the next field, but in search the field is not found.

Field extraction regex: (?<=Evaluation: )(?P<evaln_from_tr>.*)(?= NumOfChannels)

Sample log line: [30-Apr-2022 05:52:40][XXX][getResults]Evaluation: zl_numcount NumOfChannels: 1 Permissions: Everybody read and write

Verbose mode search without the new field:
index=XXX sourcetype=XXX host=XXX source="XXX.txt" method=geResults "* NumOfChannels: *"
found 1485 events like the example, with different names.

Verbose mode search with the new field:
index=XXX sourcetype=XXX host=XXX source="XXX.txt" method=geResults evaln_from_tr="*" "* NumOfChannels: *"
found 0 events!

Why is the field shown when extracting another field, but not found by search?
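For reference, a minimal props.conf sketch of an equivalent search-time extraction written without lookarounds, in case the lookaround form is the culprit; the stanza name is a placeholder for the actual sourcetype:

# props.conf (search-time extraction; stanza name is a placeholder)
[XXX]
EXTRACT-evaln_from_tr = Evaluation:\s+(?P<evaln_from_tr>\S+)\s+NumOfChannels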
Error connecting: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed - please check the output of the `openssl verify` command for the certificates involved; note that if certificate verification is enabled (requireClientCert or sslVerifyServerCert set to "true"), the CA certificate and the server certificate should not have the same Common Name.. Your Splunk instance is specifying custom CAs to trust using sslRootCAPath configuration in server.conf's [sslConfig] stanza. Make sure the CAs in the appsCA.pem (located under $SPLUNK_HOME/etc/auth/appsCA.pem) are included in the CAs specified by sslRootCAPath. To do this, append appsCA.pem to the file specified by the sslRootCAPath parameter.
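A minimal sketch of the checks the error message itself suggests; the certificate file names are placeholders for whatever sslRootCAPath and serverCert point to on this instance:

# verify the server certificate against the CA bundle referenced by sslRootCAPath (paths are placeholders)
openssl verify -CAfile /opt/splunk/etc/auth/my_root_ca.pem /opt/splunk/etc/auth/my_server_cert.pem

# append appsCA.pem to that bundle, as the message recommends
cat /opt/splunk/etc/auth/appsCA.pem >> /opt/splunk/etc/auth/my_root_ca.pem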
Hi, I have this Gantt chart, for example, showing stages and the time they took:

I need to find the critical-path values and then give them one color, different from the stages that aren't part of the critical path. Does Splunk support finding the critical path in a Gantt chart? And if not, and I calculate it myself, how can I change the color of these specific stages in the same Gantt chart? For example, I created this query that finds the stages in the critical path and keeps them in stage_critical:

index="abc" source="efg"
| table Stage,STARTTIME,FINISHTIME,TIME_RUNNING,FEEDER_ID_NAME,dependOn,FEEDER_ID,username,id,DUT
| search FEEDER_ID_NAME=*
| search id="1234" DUT IN (*) STARTTIME!="NULL" FINISHTIME=*
| eval Stage=DUT.".".Stage
| stats list(dependOn) as dependOn by id,DUT,STARTTIME,Stage,FINISHTIME
| mvexpand dependOn
| eval sp=split(dependOn," ")
| mvexpand sp
| dedup sp
| eval dependOn=sp
| eval dut2=replace(dependOn,DUT."_"."*"."_","==")
| table *
| rex field=dut2 "==(?<stage_critical>\w+)"
| eval stage_critical=DUT.".".stage_critical
| table *

In this query, I want every Stage that appears in stage_critical to show in red. Would you help me?
Hi, this is what appears to me when I try to complete the training:

Denied Person
Due to U.S. export compliance requirements, Splunk has blocked your access to Splunk web properties. We are in the process of reviewing this and you will get a welcome email from Splunk once the review is cleared. This review may take up to 2 business days. If you do not receive a welcome email from Splunk after 2 business days, feel free to reach out to support@splunk.com When reaching out, be sure to provide your full name, complete address, email, and the Splunk.com username you registered with. We will respond as soon as possible.

I don't know the reason; can you help me?
Hello, my SPL expertise is limited. I'm trying to write a search which matches a sequence of events. I'm working with Sysmon logs from a Windows machine.

The first event is a file creation event where Image ends with dllhost.exe and TargetFilename starts with C:\windows\system32\. Something like:

index=sysmon EventID=11 Image="*dllhost.exe" TargetFilename="C:\\windows\\system32\\*"

The next event is an image load event where Image starts with C:\windows\system32\ and Signature does not start with the keyword "Microsoft ". Something like:

index=sysmon EventID=7 Image="C:\\windows\\system32\\*" Signature != "Microsoft *"

The value of TargetFilename in Event 1 must be equal to the value of the ImageLoaded field in Event 2, and Event 2 must occur within 1 minute of Event 1. I tried an inner join, where I join results based on TargetFilename from Event 1 and ImageLoaded (renamed) from Event 2, but this solves only the first part of the puzzle. I want both events to occur in sequence, i.e. join only if Event 2's time is within 1 minute of Event 1's time. I don't know how to articulate this with SPL. Also, it would be nice if someone could show me how to do all this with tstats. Thanks
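A minimal stats-based sketch of the correlation described above, as one possible shape rather than a definitive answer; field names come from the question, the 60-second check mirrors the stated window, and it assumes each TargetFilename/ImageLoaded value pairs at most one creation with one load:

index=sysmon ((EventID=11 Image="*dllhost.exe" TargetFilename="C:\\windows\\system32\\*") OR (EventID=7 Image="C:\\windows\\system32\\*" Signature!="Microsoft *"))
| eval join_key=coalesce(TargetFilename, ImageLoaded)
| stats min(eval(if(EventID==11, _time, null()))) as create_time min(eval(if(EventID==7, _time, null()))) as load_time by join_key
| where isnotnull(create_time) AND isnotnull(load_time) AND load_time >= create_time AND load_time - create_time <= 60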
Hello, I have a dashboard panel which gives the latest time with respect to source and host. Now I want to color the rows where the latest time is more than one day old, searching over the last 7 days. Please help me out.

index=A OR index=B
| stats latest(_time) as latest_time by source,host
| eval latest_time=strftime(latest_time,"%d/%m/%y %H:%M:%S:%Q")
| table latest_time,source,host
| sort -latest_time

When the time is more than 24 hours old, the column should be in red, as shown below. Thank you in advance, Veeru.

latest_time             source          host
01/05/22 23:19:08:898   trace.log       y
30/04/22 23:19:08:597   SystemOut.log   y
30/04/22 23:19:08:388   SystemOut.log   x
30/04/22 23:19:08:388   trace.log       x
30/04/22 23:19:05:611   SystemOut.log   y
30/04/22 23:19:05:611   trace.log       x
30/04/22 23:09:40:000   SystemOut.log   y
30/04/22 23:06:05:000   SystemOut.log   x
30/04/22 22:57:14:000   SystemOut.log   y
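A minimal sketch of one way to expose the age so a dashboard color rule can key on it; the is_stale field name and the 86400-second (24-hour) threshold are illustrative choices, not part of the original search:

index=A OR index=B
| stats latest(_time) as latest_time by source,host
| eval is_stale=if(now() - latest_time > 86400, 1, 0)
| eval latest_time=strftime(latest_time,"%d/%m/%y %H:%M:%S:%Q")
| table latest_time,source,host,is_stale
| sort -latest_time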
I am trying to download Splunk Enterprise, but keep getting an error message telling me that there was an error loading the page and to try again in a few minutes. Looking at the address bar, I'm assuming it has to do with acceptance of the EULA, but it doesn't give me the option of doing that. I've tried downloading other versions, for other OSes, older versions, but the same error keeps appearing. I've tried downloading on other PCs and in other browsers; nothing works. How do people get into using Splunk? I can't. Ignasz
Hi everybody, I have a DB input that works normally: valid connection, a query that shows data, etc. But very often, once or twice a day, it stops indexing data and I have to manually enable and then disable it to make it work normally again. When I use

index=_internal source=*dbx2* mi_input://<my source>

the result shows the job just stops dead at [action=start_executing_dbinput], with no other action like checking the rising column or action complete. Since it doesn't show anything other than stopping at action=start, and no bug or error code, I don't know how to deal with it other than manually restarting the job. Can anyone help me with this?
Block:

2022-02-14 02:30:00,046 [Worker-3] DEBUG User job started
2022-02-14 02:30:00,063 [Worker-3] DEBUG Calling importData
2022-02-14 02:30:00,063 [Worker-3] DEBUG Initializing External DB connection
2022-02-14 02:30:00,063 [Worker-3] ERROR Exception occured
2022-02-14 02:30:00,067 [Worker-3] DEBUG url before binding
2022-02-14 02:30:00,560 [Worker-3] DEBUG inside finally...
2022-02-14 02:30:00,567 [Worker-3] DEBUG sending Notification Email
2022-02-14 02:30:00,567 [Worker-3] DEBUG User job ended

2022-02-14 02:30:00,046 [Worker-3] DEBUG User job started
2022-02-14 02:30:00,063 [Worker-3] DEBUG Calling importData
2022-02-14 02:30:00,063 [Worker-3] DEBUG Initializing External DB connection
2022-02-14 02:30:00,067 [Worker-3] DEBUG url before binding
2022-02-14 02:30:00,560 [Worker-3] DEBUG inside finally...
2022-02-14 02:30:00,567 [Worker-3] DEBUG sending Notification Email
2022-02-14 02:30:00,567 [Worker-3] DEBUG User job ended

Expected output:

2022-02-14 02:30:00,063 [Worker-3] ERROR Exception occured
2022-02-14 02:30:00,067 [Worker-3] DEBUG url before binding
2022-02-14 02:30:00,560 [Worker-3] DEBUG inside finally...
2022-02-14 02:30:00,567 [Worker-3] DEBUG sending Notification Email

Thanks in advance
Hello, I use the search below in order to calculate a percentage, but I need to prefix it with + if s > s2 and with - if s < s2. How can I do this, please?

`index` sourcetype="session"
| bin _time span=15m
| eval time=strftime(_time,"%H:%M")
| stats dc(s) as s by time
| table s
| appendcols [ search `index` sourcetype="session" earliest=-7d@d+7h latest=-7d@d+19h
    | bin _time span=15m
    | eval time=strftime(_time,"%H:%M")
    | stats dc(s) as s2 by time
    | table s2]
| eval perc=round((s/s2)*100,1). "%"
| table perc
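A minimal sketch of one way the sign could be added, keeping the original percentage formula and assuming the prefix should simply come from comparing s and s2:

| eval perc=round((s/s2)*100,1)
| eval perc=case(s > s2, "+" . perc . "%", s < s2, "-" . perc . "%", true(), perc . "%")
| table perc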
Hi everyone, I want to override an EVAL statement that exists in a Splunkbase TA without modifying the Splunkbase TA itself. So I created a custom TA and put in the same EVAL statement plus the extra category I want to extract, but it is not working. Can anybody please help me with how to do that?

Splunkbase TA config /opt/splunk/etc/apps/TA-microsoft/default/props.conf
EVAL-internal_message_id = case(category IN ("Events1", "Events2"),'properties.MessageId')

Custom TA config /opt/splunk/etc/apps/A-csc_cyber_genric_sh_Splunk_TA/default/props.conf
EVAL-internal_message_id = case(category IN ("Events1","Events2","Events3"),'properties.MessageId')

Thanks in advance
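For reference, a minimal sketch of what the custom TA's props.conf would need to contain; the stanza name below is a placeholder and would have to match the sourcetype (or source/host) stanza under which the Splunkbase TA defines its EVAL:

# /opt/splunk/etc/apps/A-csc_cyber_genric_sh_Splunk_TA/default/props.conf
[<same sourcetype stanza as in TA-microsoft>]
EVAL-internal_message_id = case(category IN ("Events1","Events2","Events3"),'properties.MessageId')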
I'm fetching some data from an API via a Python script and passing it to Splunk, but Splunk is not parsing the JSON format. I've tested my output with a JSON parser with no errors. If I set the sourcetype to a custom one, I receive the events as text, but when I set the sourcetype to _json it gives a line-breaking error (expected : \). Below is the Python script. I'm also using json.dumps while printing. Now I'm writing to a file and fetching it with a monitor input.

# This script is fetching data from the VirusTotal API and passing it to Splunk.
# Checkpointing is enabled to drop duplicate events.
import json, requests, sys, time, os
from datetime import datetime

proxies = {
    'https': 'http://security-proxy.emea.svc.corpintra.net:3128'
}
url = "https://www.virustotal.com/api/v3/intelligence/hunting_notifications"
params = {
    'limit': 40,
    'count_limit': 10000
}
headers = {
    "Accept": "application/json",
    "x-apikey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
}
current_time = datetime.now()
file_path = f'/opt/splunk/etc/apps/infy_ta_virustotal_livehunt_validation/bin/data/'
complete_name = file_path + f'livehunt_{time.strftime("%Y_%m_%d_%H_%M_%S")}'
keys_filename = f'/opt/splunk/etc/apps/infy_ta_virustotal_livehunt_validation/bin/keys.txt'


def write_new_keys_in_file(keys_filename, keys_to_be_indexed):
    # Persist the IDs seen in this run so duplicates are dropped next time.
    try:
        with open(keys_filename, 'w') as file:
            for key in keys_to_be_indexed:
                file.write(str(key))
                file.write('\n')
    except Exception as e:
        print(e)


def get_indexed_key(keys_filename):
    # Read previously indexed IDs; create the checkpoint file if it does not exist yet.
    try:
        with open(keys_filename, 'r') as file:
            indexed_keys = file.read().splitlines()
            return indexed_keys
    except Exception as e:
        with open(keys_filename, 'w') as file:
            indexed_keys = []
            return indexed_keys


def get_json_data(url, headers, params, proxies):
    try:
        response = requests.get(url=url, headers=headers, params=params, proxies=proxies).json()
        return response
    except Exception as e:
        print(e)
        sys.exit(1)


def write_to_file(complete_name, data):
    # Append each notification as one JSON object per line.
    with open(complete_name, 'a') as f:
        json.dump(data, f)
        f.write('\n')


def stream_to_splunk(json_response, indexed_keys, complete_name):
    try:
        keys_to_be_indexed = []
        events_to_be_indexed = []
        for item in json_response['data']:
            keys_to_be_indexed.append(item['id'])
            if item['id'] not in indexed_keys:
                write_to_file(complete_name=complete_name, data=item)
                events_to_be_indexed.append(item)
        print(json.dumps(events_to_be_indexed, indent=4, sort_keys=True)) if len(events_to_be_indexed) else 1 == 1
        return keys_to_be_indexed
    except Exception as e:
        print(e)


def main():
    try:
        json_response = get_json_data(url=url, headers=headers, params=params, proxies=proxies)
        indexed_keys = get_indexed_key(keys_filename=keys_filename)
        keys_to_be_indexed = stream_to_splunk(json_response=json_response, indexed_keys=indexed_keys, complete_name=complete_name)
        write_new_keys_in_file(keys_filename=keys_filename, keys_to_be_indexed=keys_to_be_indexed)
    except Exception as e:
        print(e)


if __name__ == "__main__":
    main()
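A minimal props.conf sketch for a custom sourcetype on the monitored file, assuming the script keeps writing one JSON object per line; the sourcetype name is a placeholder, not anything defined by the add-on:

# props.conf (placeholder sourcetype for the monitored output file)
[vt_livehunt_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
KV_MODE = json
TRUNCATE = 0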
Hi, I created a new trial account and want to work my way through one of the workshops to learn about the product.  Where/how can I access the EC2 instance that is spun up by default, so I can install the collector?  I haven't been provided with an IP address or details where to find it. Thanks for any help! Rgds 
I am using version 8.2.3, Build cd0848707637. Under Settings / Data inputs, I entered "dashboard" in the find box and selected Dashboard Studio. I get the Dashboard Studio page. From the key features, I select Dashboards, then I click Create New Dashboard. I get a popup that asks for:

Dashboard title
Edit id
Description
Permissions

I then click Create and get this message: "You must select a dashboard type." My question is: where do I select the dashboard type?
I am trying to work on props.conf to parse and break events correctly. I am pushing data using curl commands, but it is sending 50 logs in one event. It worked through the UI but fails when sent from curl. I want to break it into individual events. Only the first event starts with "{"sourcetype": "json","event": {" and ends with "last_updated" (example: "last_updated": "2022-03-24T02:35:41.148727Z" },). The rest of the events start with id and end with last_updated. There are a lot of nested IDs in the event which I did not post, but the syntax should be something that breaks after last_updated.

I want the events to break after the "last_updated" value followed by the closing curly brackets, and the new event should start from { "id": . Note: only the first event's start is different; all other events start with id and end with last_updated.

I tried BREAK_ONLY_BEFORE=\"\w*\"\:\s\"\d*\-\d*\-\d*\w\d*\:\d*\:\d*\.\d*\w\" but it's not breaking correctly.

Following are the sample events that I want to break.

Event 1:

{"sourcetype": "json","event": { . . . . . }, "created": "2022-02-07", "last_updated": "2022-03-24T02:35:41.083145Z"

Event 2:

{ "id": 150749, "name": "no hostname 1660322000234", . . . . . "created": "2022-02-07", "last_updated": "2022-03-24T02:35:41.148727Z" }

I used the props below. It worked when uploading a sample file via the GUI, but when I used this sourcetype with curl through HEC it is not breaking.

[ Netbox ]
CHARSET=UTF-8
DATETIME_CONFIG=CURRENT
LINE_BREAKER=([\r\n]+)\s+{
MUST_BREAK_BEFORE=\"\w*\"\:\s\"\d*\-\d*\-\d*\w\d*\:\d*\:\d*\.\d*\w\"
NO_BINARY_CHECK=true
SHOULD_LINEMERGE=false
category=Custom
disabled=false
pulldown_type=true

curl:
curl -k http://10.xx.xx.xx:8088/services/collector/event -H 'Authorization: Splunk <TOKEN>' -d '{"sourcetype": "Netbox","event": '"$SITEINFO"'}'
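For context, one detail that may matter here: the /services/collector/event HEC endpoint treats each "event" field in the JSON envelope as a single, already-delimited event, so LINE_BREAKER and SHOULD_LINEMERGE are generally not applied to it, whereas the /services/collector/raw endpoint does run line breaking. A hedged sketch of the raw-endpoint variant, reusing the token and payload placeholders from the question; the channel GUID is an illustrative value (the raw endpoint expects a channel identifier):

curl -k "http://10.xx.xx.xx:8088/services/collector/raw?channel=0aeeac95-ac74-4aa9-b30d-6c4c0ac581ba&sourcetype=Netbox" -H 'Authorization: Splunk <TOKEN>' -d "$SITEINFO"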
I am pulling Azure billing subscription data with the Microsoft Azure Add-on for Splunk. It is only pulling 1000 records per interval (7200), and sometimes I get no data at all. Can someone help with this? Thanks in advance