All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Read my notes and kept trying until I got it!  index=etims_na sourcetype=etims_prod platformId=5 bank_fiid=COST | eval response_time=round(if(strftime(_time,"%Z") == "EDT",((j_timestamp-entry_timestamp)-14400000000)/1000000,((j_timestamp-entry_timestamp)-14400000000)/1000000-3600),3) | stats count AS Transactions count(eval(response_time <= 1)) AS "Good" count(eval(response_time <= 2)) AS "Fair" count(eval(response_time > 2)) AS "Unacceptable" avg(response_time) AS "Average" BY bank_fiid | eval "%Good"=(Good/Transactions)*100 | eval "%Fair"=(Fair/Transactions)*100 | eval "%Unacceptable"=(Unacceptable/Transactions)*100 | addinfo | eval "Report Date"=strftime(info_min_time, "%m/%Y") | table bank_fiid, "Transactions", "Good", "%Good", "Fair", "%Fair", "Unacceptable", "%Unacceptable", "Average", "Report Date" | rename bank_fiid as "Vision ID"
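For anyone who wants to sanity-check the arithmetic outside Splunk, here is a minimal Python sketch of the same eval and stats logic. It assumes (as the query implies) that j_timestamp and entry_timestamp are epoch microseconds, and that 14400000000 µs is the 4-hour EDT offset; the sample values are made up.

```python
# Mirrors the SPL eval: (j_timestamp - entry_timestamp) minus a 4 h
# offset, converted from microseconds to seconds, rounded to 3 places.
EDT_OFFSET_US = 14_400_000_000  # 4 hours in microseconds

def response_time(j_timestamp, entry_timestamp, is_edt=True):
    secs = ((j_timestamp - entry_timestamp) - EDT_OFFSET_US) / 1_000_000
    if not is_edt:
        secs -= 3600  # the query subtracts one more hour outside EDT
    return round(secs, 3)

def stats_counts(times):
    # Same buckets as the stats clause; note that "Fair" (<= 2) also
    # counts every "Good" (<= 1) event, exactly as the query is written.
    return {
        "Transactions": len(times),
        "Good": sum(t <= 1 for t in times),
        "Fair": sum(t <= 2 for t in times),
        "Unacceptable": sum(t > 2 for t in times),
    }

# Made-up timestamps whose difference is the offset plus 1.5 s:
print(response_time(14_401_500_000, 0))   # 1.5
print(stats_counts([0.5, 1.5, 2.5]))
```

One thing this makes visible: because "Fair" counts response_time <= 2 rather than 1 < response_time <= 2, the Good/Fair/Unacceptable counts do not sum to Transactions.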
Valid keys are documented in the related .spec file in $SPLUNK_HOME/etc/system/README. For instance, for the keys for props.conf, see $SPLUNK_HOME/etc/system/README/props.conf.spec. You can also find the information in the Admin Manual.
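If scanning the .spec file by eye gets tedious, a short script can pull the key names out. A rough sketch — the regex covers only the common `key = <value>` line shape, not the full .spec grammar, and the sample text is illustrative rather than real spec contents:

```python
import re

def spec_keys(spec_text):
    """Collect key names from 'key = <value>' lines in .spec-style text."""
    keys = []
    for line in spec_text.splitlines():
        # Key lines start with the key name followed by '='; comment
        # lines in .spec files start with '*' and stanza headers with '['.
        m = re.match(r"^(\w[\w.]*)\s*=", line)
        if m:
            keys.append(m.group(1))
    return keys

# Illustrative snippet in the style of a .spec file (not real contents):
sample = """\
[file_integrity]
interval = <decimal>
* How often, in seconds, to run the check.
disabled = <boolean>
"""
print(spec_keys(sample))  # ['interval', 'disabled']
```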
Hi, We have a custom python service being monitored by APM using the Opentelemetry agent. We have been successful in tracing spans related to our unsupported database driver (clickhouse-driver) but are wondering if there is some tag we can use to get APM to recognize these calls as database calls for the purposes of the "Database Query Performance" screen. I had hoped we could just fill out a bunch of the `db.*` semantic conventions but none have so far worked to get it to show as a database call (though the instrumented data do show up in the span details). Any tips?
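For reference, these are the db.* attributes from the OpenTelemetry database semantic conventions that such a span would normally carry; the values below are illustrative for clickhouse-driver, not taken from a real trace. Whether Splunk APM's Database Query Performance view recognizes them is exactly the open question — it may key on a specific list of supported db.system values and on the span kind, so both are worth checking before assuming the attributes are wrong:

```python
# db.* attributes per the OTel database semantic conventions, as they
# would be set on a CLIENT-kind span; every value here is made up.
db_span_attributes = {
    "db.system": "clickhouse",                     # database type
    "db.name": "analytics",                        # hypothetical db name
    "db.statement": "SELECT count() FROM events",  # the query text
    "db.operation": "SELECT",
    "net.peer.name": "clickhouse.internal",        # hypothetical host
    "net.peer.port": 9000,
}

# With the opentelemetry-api package these would be attached roughly as:
#   with tracer.start_as_current_span("query", kind=SpanKind.CLIENT) as span:
#       for k, v in db_span_attributes.items():
#           span.set_attribute(k, v)
print(sorted(k for k in db_span_attributes if k.startswith("db.")))
```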
Is there a way to get a list of valid keys for a stanza? For example: If you get "Invalid key in stanza" for something like: [file_integrity] exclude = /file/path It doesn't like the "exclude" but is there an alternative "key" value to accomplish the same? Thanks in advance!  
I'm trying to achieve the following output using the table command, but am hitting a snag.

Vision ID | Transactions | Good | % Good | Fair | % Fair | Unacceptable | % Unacceptable | Average Response Time | Report Date
ABC STORE (ABCD) | 159666494 | 159564563 | 99.9361601 | 101413 | 0.063515518 | 518 | 0.000324426 | 0.103864001 | Jul-24
Total | 159666494 | 159564563 | 99.9361601 | 101413 | 0.063515518 | 518 | 0.000324426 | 0.103864001 | Jul-24

Thresholds: response <= 1s | 1s < response <= 3s | 3s < response

Here is my broken query:

index=etims_na sourcetype=etims_prod platformId=5 bank_fiid = ABCD | eval response_time=round(if(strftime(_time,"%Z") == "EDT",((j_timestamp-entry_timestamp)-14400000000)/1000000,((j_timestamp-entry_timestamp)-14400000000)/1000000-3600),3) | stats count AS Total count(eval(response_time<=1)) AS "Good" count(eval(response_time<=2)) AS "Fair" count(eval(response_time>2)) AS "Unacceptable" avg(response_time) AS "Average" BY Vision_ID | eval %Good= round((Good/total)*100,2), %Fair = round((Fair/total)*100,2), %Unacceptable = round((Unacceptable/total)*100,2) | addinfo | eval "Report Date"=strftime(info_min_time, "%m/%Y") | table "Vision_ID", "Transactions", "Good", "%Good" "Fair", "%Fair", "Unacceptable", "%Unacceptable", "Average", "Report Date"

The help is always appreciated. Thanks!
Thanks @PaulPanther, I checked the link on my SH, but I'm not sure exactly what I'm looking for. I searched for the missing logs (secure and audit.log) but didn't see anything; at the same time, I didn't see any mention of the logs that are being ingested, like messages and cron. Thanks for your help.
I have done many Splunk React apps (Enterprise Security, Mission Control, etc.). In my experience, the easiest way is to embed your React code in an app. One of the clearest ways to do this is to follow the instructions provided in the Splunk UI library. It can be somewhat daunting at first to use Splunk UI and its build scripting, but so many things will just work once you have your React code packaged in an app. Authentication won't be an issue for you anymore, and you can easily call Splunk endpoints and they will just work.
| savedsearch "Incident Review - Main" time_filter="" event_id_filter="" source_filter="" security_domain_filter="" status_filter="status=\"1\"" owner_filter="" urgency_filter="urgency=\"critical\" OR urgency=\"high\" OR urgency=\"medium\" OR urgency=\"low\" OR urgency=\"unknown\"" tag_filter=""
Similar error here, now resolved: I had a correlation rule in ES calling a saved search; it required the addition of "type_filter".
Hi @fvincenzi , the easiest way is to ask Splunk Support. Otherwise, some weeks ago someone hinted at a site containing old versions. Ciao. Giuseppe
Hello, I need to download Splunk Enterprise 7.2.* in order to upgrade from version 6.6. Where can I find the older versions? Thank you
Looking to add a tooltip string of site names, included in the same lookup file as the lat/long, on a cluster map. Is this even possible?
I am looking to add text as well. I am trying to add a tooltip string but haven't had any luck.
Hi @MK3 , sorry, but there's some confusion in your question. To forward data from Forwarders to Splunk Enterprise, follow the instructions at:
https://docs.splunk.com/Documentation/SplunkCloud/latest/Forwarding
https://docs.splunk.com/Documentation/Splunk/9.3.0/Data/Forwarddata
To forward data you need outputs.conf, which can be in $SPLUNK_HOME/etc/system/local or in a dedicated app. To take in logs, you need inputs.conf, which lives in the same folder. props.conf and transforms.conf are in the same folder too, but usually aren't relevant on Forwarders (if Universal).
$SPLUNK_HOME is the folder where you installed Splunk; by default it's C:\Program Files\splunk on Windows and /opt/splunk on Linux.
You cannot send indexed data from a Heavy Forwarder, because it doesn't index data, but maybe you mean cooked data: you can send cooked (or uncooked) data to a third party using syslog. To send data to an external database you must use DB Connect on Search Heads, but that's a different thing. Ciao. Giuseppe
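To make the outputs.conf pointer concrete, a minimal forwarding stanza might look like the following; the group name, host, and port are placeholders for your own indexer, not values from this thread:

```ini
# $SPLUNK_HOME/etc/system/local/outputs.conf (or in a dedicated app)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997
```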
Hello, as per https://docs.splunk.com/Documentation/Splunk/9.3.0/Forwarding/EnableforwardingonaSplunkEnterpriseinstance where are files like outputs, props and transforms stored? I am using Splunk Enterprise (web UI). Also, where is my $SPLUNK_HOME? I'm trying to set up heavy forwarding to send indexed data to a database on a schedule. Thanks
Still the same error. Also, our application has multiple jars.
Hi @Easwar.C, Can you confirm if the latest reply helped answer your post or not? If not, reply back and keep the conversation going. 
I'm using the Splunk TA for Linux to collect server logs. Some background: looking in the "_internal" log, I am seeing a lot of these errors:

08-23-2024 15:52:39.910 +0200 WARN DateParserVerbose [6460 merging_0] - A possible timestamp match (Wed Aug 19 15:39:00 2015) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source=lastlog|host=<hostname>|lastlog|13275
08-23-2024 15:52:39.646 +0200 WARN DateParserVerbose [6460 merging_0] - A possible timestamp match (Fri Aug 7 09:08:00 2009) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source=lastlog|host=<hostname>|lastlog|13418
08-23-2024 15:52:32.378 +0200 WARN DateParserVerbose [6506 merging_1] - A possible timestamp match (Fri Aug 7 09:09:00 2009) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source=lastlog|host=<hostname>|lastlog|13338

This is slightly confusing and somewhat problematic, as the "lastlog" data is collected not through a file watch but from scripted output. The "lastlog" file itself is not collected/read, and a stat check on the file confirms accurate dates; however, that is not the source of the problem. I cannot see anything in the output from the commands in the script (Splunk_TA_nix/bin/lastlog.sh) which would indicate the presence of a "year"/timestamp. The indexed log does not contain a "year", and the actual _time timestamp is correct. These "years" in "_internal" are also from a time when the server was not running/present, so they are not collected from any actual source "on the server".

And the questions:
- Why am I seeing these errors?
- From where are these problematic "timestamps" generated?
- How do I fix the issue?

All the best
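One way to stop Splunk from hunting for timestamps inside the scripted output is to pin the sourcetype's timestamp to index time in props.conf. A sketch, assuming the events should simply carry the time they were collected — verify the sourcetype name against your own TA before copying:

```ini
# props.conf on the indexer or heavy forwarder
[lastlog]
# Use the current (index) time instead of parsing dates out of the
# event text, which is what triggers the DateParserVerbose warnings.
DATETIME_CONFIG = CURRENT
```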
Hi @Jeffrey.Escamilla, It looks like the community has not yet jumped in to reply. Have you happened to find a solution or more information you can share? If you still need help with this question, you can reach out to your CSM or contact AppDynamics Support: AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.  If you get an answer, it would be helpful if you could come back and share it here. 
Hello, I want to create a dataset for Machine Learning. I want the KPI name and Service Health Score as field names and their values as values for the last 14 days. How do I retrieve the kpi_value and health_score values? Are they stored somewhere in an ITSI index? I cannot find a kpi_value field in index=itsi_summary. #predictive analytics #machine learning #splunk it #Splunk Machine Learning Toolkit #Splunk ITSI Also, if you have done Machine Learning / Predictive Analytics in your environment, please suggest an approach.
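On the data-shaping side: once you can pull rows of (time, metric name, value) out of the summary index (in itsi_summary the KPI's numeric result is typically in a field called alert_value rather than kpi_value, but verify against your own events), turning them into an ML-ready table is just a pivot. A stdlib-only sketch with made-up rows:

```python
from collections import defaultdict

# Hypothetical rows as they might come back from a search over the
# summary index: one row per (time, metric) pair.  Field names and
# values here are assumptions for illustration.
rows = [
    {"_time": "2024-08-01", "name": "CPU Load", "value": 0.72},
    {"_time": "2024-08-01", "name": "ServiceHealthScore", "value": 95.0},
    {"_time": "2024-08-02", "name": "CPU Load", "value": 0.65},
    {"_time": "2024-08-02", "name": "ServiceHealthScore", "value": 97.0},
]

# Pivot to one row per day with one column per KPI name -- the wide
# shape an ML toolkit generally wants as input.
dataset = defaultdict(dict)
for r in rows:
    dataset[r["_time"]][r["name"]] = r["value"]

for day in sorted(dataset):
    print(day, dataset[day])
```

In SPL the same wide shape usually comes from stats over the name/time fields followed by xyseries (or a timechart), so the Python step is only needed if you export the raw rows first.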