All Posts

Check your index to see when data was last entered:

| metadata type=sourcetypes index=test | fieldformat recentTime=strftime(recentTime,"%F %T") | fieldformat firstTime=strftime(firstTime,"%F %T") | fieldformat lastTime=strftime(lastTime,"%F %T")
I have my reasons. I don't want to impose changes on the local. I need to use the original add-on and add my correctly named add-on alongside it, which would override the search= parameter in the original one.

The original add-on is Splunk_TA_openldap, default/savedsearches.conf:

[Update openldap_user_lookup KV Store collection]
request.ui_dispatch_app = search
disabled = 0
alert.track = 0
cron_schedule = */2 * * * *
dispatch.earliest_time = -4m
dispatch.latest_time = -2m
enableSched = 1
search = sourcetype="openldap:access" operation="BIND" | dedup conn cn | table conn op cn | rename cn as user | lookup openldap_user_lookup conn, op OUTPUTNEW _key AS _key | outputlookup append=t openldap_user_lookup

My override is A10_aaa_ta_openldap, default/savedsearches.conf:

[Update openldap_user_lookup KV Store collection]
search = `openldap_index` sourcetype="openldap:access" operation="BIND" | dedup conn cn | table conn op cn | rename cn as user | lookup openldap_user_lookup conn, op OUTPUTNEW _key AS _key | outputlookup append=t openldap_user_lookup

I know btool and I am using it. There are more problems. One is that, according to btool, the savedsearches.conf precedence does not behave as documented, i.e. app/user context with reverse-lexicographic order. The second is that Splunk reports a problem with duplicate configuration. So far I haven't found any information in the documentation saying that savedsearches.conf should behave differently from, for example, macros, props, etc.
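For comparison, the mechanism that is guaranteed to win within a single app is a local-layer override; this is only a sketch of the commonly documented fallback (the poster prefers to avoid local changes, so it is shown for reference, not as the recommended fix):

```ini
# Splunk_TA_openldap/local/savedsearches.conf -- sketch only.
# Within the same app, local/ always takes precedence over default/,
# so this stanza overrides only the search= key and inherits the rest.
[Update openldap_user_lookup KV Store collection]
search = `openldap_index` sourcetype="openldap:access" operation="BIND" | dedup conn cn | table conn op cn | rename cn as user | lookup openldap_user_lookup conn, op OUTPUTNEW _key AS _key | outputlookup append=t openldap_user_lookup
```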
Hi All,

We created a dashboard to monitor CCTV and it was working fine. However, data suddenly stopped populating. We have not made any changes.

My findings:
1 - If I select last 30 days, the dashboard works fine.
2 - If I select a time range of last 20 days, the dashboard does not work.
3 - I started troubleshooting the issue and found the following.

SPL query: the query below works fine when the time range is last 30 days:

index=test 1sourcetype="stream" NOT upsModel=*1234* | rename Device AS "UPS Name" | rename Model AS "UPS Model" | rename MinRemaining AS "Runtime Remaining" | replace 3 WITH Utility, 4 WITH Bypass IN "Input Source" | sort "Runtime Remaining" | dedup "UPS Name" | table "UPS Name" "UPS Model" "Runtime Remaining" "Source" "Location"

Note: the same SPL query does not work when the time range is last 20 days.

Troubleshooting: Splunk is receiving data up to today, but I have noticed a few things. When I select last 30 days, I can see these fields in the search: UPS Name, UPS Model, Runtime Remaining, Source. When I select last 20 days, those fields are missing, and I'm not sure why. Because of the missing fields, the same query returns no data; the dedup and table commands reference fields that don't exist in that time range.

Thanks
I have modified the Azure API links, replacing all of them with the Azure China API URLs, but I can only collect part of the data, not all of it.
apiStartTime and apiEndTime are not set when info=completed but are set when info=granted - try something like this:

index=_audit action=search provenance=* info=granted host IN (...) (apiStartTime="ZERO_TIME" OR apiEndTime="ZERO_TIME") | table user, apiStartTime, apiEndTime, search_et, search_lt, search | convert ctime(search_*)
Hi DavidLi, I didn't realise that after a year you would still reply. Thank you so much!
Hi Team,

I am using a free trial version of Splunk and forwarding logs from a Palo Alto firewall to Splunk. Sometimes I get logs and sometimes I don't; it seems to be a timezone issue. My Palo Alto firewall is in the US/Pacific time zone. How can I check the Splunk timezone, and how can I configure it to be the same on both sides? #splunktimeZone
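One common way to handle mismatched timestamps is to pin the timezone for the firewall's sourcetype in props.conf on the indexer (or heavy forwarder); a minimal sketch, where the sourcetype name is an assumption — substitute whatever sourcetype your Palo Alto data actually arrives under:

```ini
# props.conf -- sketch; [pan:traffic] is an assumed sourcetype name.
# TZ tells Splunk how to interpret timestamps that carry no zone info.
[pan:traffic]
TZ = US/Pacific
```

The Splunk server's own clock/zone can be checked with `date` on the host, and each user's display timezone is set per-account under preferences in Splunk Web.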
Hi @Taruchit,

First, don't use the search command when you can put all the parameters in the main search. Then, I'd avoid using All Time in a search, because you could have too many events; define a useful timerange instead.

index=_audit action=search provenance=* info=completed host IN (...) (apiStartTime="ZERO_TIME" OR apiEndTime="ZERO_TIME") | table user, apiStartTime, apiEndTime, search_et, search_lt, search | convert ctime(search_*)

About the meaning of the results: they depend on the parameters you defined; probably with apiEndTime="ZERO_TIME" you don't have the apiStartTime field. Analyze your search and modify it to get the best results for you.

Ciao.
Giuseppe
| eventstats values(eval(if(status="Issue","Bad",null()))) as Health
Hi @nkavouris,

You can use a subsearch to filter results in the main search, passing fields with the same name and taking care to pass only the fields used for filtering; in your case: keystone_time, serial_number, message, after, but not model, which isn't used in the main search. The problem is the message field, because you need to use it as part of the search; in this case you have to rename it to "query":

search index="june_analytics_logs_prod" [ search index="june_analytics_logs_prod" (message=* new_state: Diagnostic, old_state: Home*) | spath serial output=serial_number | spath message output=message | spath model_number output=model | eval keystone_time=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q"), before=keystone_time-10, after=_time+10 | eval latest=strftime(latest,"%Y-%m-%d %H:%M:%S.%Q") | rename message AS query | fields keystone_time serial_number query after ]

Renaming message AS query allows the subsearch result to be matched in full-text search mode. I haven't used it with other fields, only by itself, but it should run.

Ciao.
Giuseppe
Hi, this error is normal; the script catches errors. All the values are good. The thing is, when I ingest these logs and set TIME_PREFIX, I get 2 values for the timestamp for just one log and not the others, even though they have the same JSON format...
Yeah, same same but different. Yesterday I applied this on the data and it started working too: s/(\\")/"/g. But now I do not see it in the sourcetype advanced options; if I add it again, the log quality will be ruined again. So I'm not sure how the TA got messed up.
Hi @Dyrock,

As you can see in https://www.splunk.com/en_us/resources/videos/getting-data-in-with-forwarders.html, and as described at https://docs.splunk.com/Documentation/Splunk/9.3.0/Data/Forwarddata and https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/Forwarding/Aboutforwardingandreceivingdata, you have to:

configure the Indexer to receive logs from UFs (I suppose that 997 is a mistyping, because the default port is 9997);
configure outputs.conf on your UF to send data to the indexers on the same port;
configure the inputs on the UF.

At this point you will see your logs in the Indexer.

Ciao.
Giuseppe
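The three steps above can be sketched as conf fragments; this is a sketch only, assuming the indexer address 192.168.0.33 from the question, and the monitored path is a placeholder chosen for illustration:

```ini
# On the indexer -- inputs.conf: listen for forwarded data (default port 9997)
[splunktcp://9997]
disabled = 0

# On the universal forwarder -- outputs.conf: send to the indexer on the same port
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = 192.168.0.33:9997

# On the universal forwarder -- inputs.conf: what to collect (example path)
[monitor:///var/log/syslog]
index = main
```

The key invariant is that the port in the forwarder's outputs.conf matches the port the indexer is listening on.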
Hi @y71855872,

Are you indexing pcap logs from Wireshark, as described in the instructions at https://splunkbase.splunk.com/app/2748 ? If you use a custom index, you have to put it in the default search path or add it to all the dashboards, as described in the instructions.

Ciao.
Giuseppe
Is there any benefit to moving "UNPSEC" back to null()? I usually just gave it "N/A" for strings and 0 for numerics.

None whatsoever. This is purely for people who want non-existent values to show blank.
Yes, your understanding is correct.
I am confused as to how to get this app to work. Can anyone provide me with an instruction sheet telling me what needs to be done? I have downloaded and installed the pcap analyzer app but can't seem to get it to analyze. Can anyone help me?
What happens if the amount of data exceeds the daily limit in Splunk Cloud ("Total ingest limit of your ingest-based subscription")?

- Data ingestion stops, or
- Splunk contacts you to discuss adding a license, but ingestion does not stop?
Hello,

This is my first experience with Splunk as I am setting up a lab in VirtualBox. I have:

VM1, acting as server: Ubuntu Desktop 24.04 LTS, IP 192.168.0.33. Installed Splunk Enterprise, added port 997 under "Configure receiving", and added an index named Sysmonlog.

VM2, acting as client: Windows 10, IP 192.168.0.34. Installed Sysmon, installed Splunk Forwarder, set the deployment server to 192.168.0.34 port 8089, and set the indexer to 192.168.0.33 port 9997.

Ping is successful from both VMs. When I am about to add the forwarder in my indexer, nothing shows up. How should I troubleshoot this to be able to add the forwarder?
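Reading the setup above, the two sides appear to disagree on the receiving port (997 on the indexer vs 9997 on the forwarder). A minimal sketch of the two configurations that must match, assuming the IPs given in the question; this is illustrative only, not a verified fix:

```ini
# Indexer (192.168.0.33) -- inputs.conf equivalent of "Configure receiving".
# The port here must be the same one the forwarder sends to (9997, not 997).
[splunktcp://9997]
disabled = 0

# Universal forwarder (Windows VM) -- outputs.conf
[tcpout:default-autolb-group]
server = 192.168.0.33:9997
```

After aligning the ports, the forwarder's connection attempts (success or failure) are usually visible in splunkd.log on both machines.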