All Posts

Where are the props installed? They must be on the first full instance (indexer or heavy forwarder) that touches the data. If the data is being onboarded via HEC, it's possible the usual ingestion pipeline is bypassed. Which HEC endpoint is used?

By the way, to recognize the time zone, the TIME_FORMAT setting should be:

TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%Z
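For reference, a minimal props.conf sketch along those lines, assuming JSON events from Kafka with a "time" field; the sourcetype name is only a placeholder:

[kafka_json_events]
# Placeholder sourcetype; use the one tied to the HEC token.
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIME_PREFIX = \"time\"\s*:\s*\"
MAX_TIMESTAMP_LOOKAHEAD = 40
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%Z

Note that events sent to the /services/collector/event endpoint take their timestamp from the HEC envelope and largely skip these parsing settings, while the /services/collector/raw endpoint goes through the usual parsing pipeline.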
Hello Splunkers! I want _time to be extracted so that it matches the time field in the events. This is token-based data: we are using an HTTP Event Collector (HEC) token to fetch the data from Kafka into Splunk, and all the default settings are under the search app (including inputs.conf and props.conf). I have tried the props in the second screenshot under the search app, but nothing works. Please help me get _time to match the time field. I have applied the settings below, but nothing works for me.

CHARSET = UTF-8
AUTO_KV_JSON = false
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
TIME_PREFIX = \"time\"\:\"
category = Custom
pulldown_type = true
TIMESTAMP_FIELDS = time
App "installation" is just unpacking a tgz archive into etc/apps. So it looks like something you should handle runtime.
Hi guys, I want to provide support for Python 3.11 and Python 3.9 for my Splunk app on Splunk Enterprise and Splunk Cloud. I don't want to publish multiple versions of the same app, one packaged with py3.9-compatible libraries and another with py3.11-compatible libraries. I can include my dependencies in two folders, lib3.7 and lib3.11. Then, during installation, is there any way I can check which Python version is available and set which lib folder the app should use? Has anyone done something similar before? Will this be achievable? Regards, Anmol Batra
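Not an official pattern, but a minimal sketch of the runtime approach suggested in the reply above, assuming the app ships its dependencies in two hypothetical folders (lib_py39 and lib_py311) at the app root and that a small helper in bin/ is imported before the dependencies:

# bin/lib_loader.py (hypothetical helper; import it at the top of the app's scripts)
import os
import sys

APP_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

def add_lib_path():
    # Pick the dependency folder that matches the interpreter Splunk launched us with.
    # Folder names are assumptions; use whatever the app actually ships.
    if sys.version_info >= (3, 11):
        lib_dir = os.path.join(APP_ROOT, "lib_py311")
    else:
        lib_dir = os.path.join(APP_ROOT, "lib_py39")
    if lib_dir not in sys.path:
        sys.path.insert(0, lib_dir)

add_lib_path()

Selecting the folder at import time rather than at installation time avoids modifying the package after install, which is generally not an option on Splunk Cloud anyway.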
This app is supported by the developer. Your best route is probably to contact the developer directly.
As @tscroggins says, it is not possible to "completely avoid" false positives and false negatives. At the end of the day, as with a lot of things, it comes down to money. How much does it cost you / your organisation to respond to a positive alert only to find it was a false positive, and therefore "wasted" cost? How much does it cost you / your organisation / your customers if you miss an "incident" due to a false negative? Lost orders? Damaged reputation? SLA breaches?

These considerations can be taken into account when putting together a business case for improving your monitoring, taking on extra staff to respond to alerts, improving your infrastructure to reduce latency, rewriting your applications to be more robust and/or self-healing, and so on. Start looking too deeply and you won't sleep at night! Find a good enough / tolerable level of monitoring that gets you close but doesn't cost the earth!
Hello members, I'm trying to integrate Splunk with the Group-IB DRP product, but I'm facing issues with the application. I entered my API key and the username from the SSO dashboard, and after the redirection there are no results from the index or any information related to the Group-IB product.

I installed this app: https://splunkbase.splunk.com/app/7124

I need to fix the problem as soon as possible.
Hi @splunklearner, in general you have to put your props.conf and transforms.conf files on your Search Heads for the search-time transformations, and on the first full Splunk instance that the data passes through (indexers or Heavy Forwarders, not Universal Forwarders) for the index-time transformations. In your case that means on the SHs and on the IDXs, because you don't have HFs. You could also put them on the UFs, but it isn't mandatory. Ciao. Giuseppe
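If it helps, you can check which instance is actually applying a given stanza by running btool on each box; the sourcetype and transform names here are just placeholders:

$SPLUNK_HOME/bin/splunk btool props list my_sourcetype --debug
$SPLUNK_HOME/bin/splunk btool transforms list my_transform --debug

The --debug flag shows which .conf file each effective setting comes from, which makes it easy to confirm that the settings landed where you expect.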
Not directly, no. Even if the source, e.g. a web server, and the destination, e.g. a Splunk indexer, have perfectly synchronized clocks (they do not), the time (latency) it takes to share information between the source and the destination is greater than zero. That time is composed of reading the source clock, rendering the source event, writing the event to storage, reading the event from storage, serializing the event to the network, transmitting the event across the network, deserializing the event from the network, reading the destination clock, rendering the destination event, and writing the destination event to storage. The preceding list is not exhaustive and may vary. Just note that it takes time to go from A to B. There are delays everywhere!

You can search by _indextime instead of _time using _index_earliest and _index_latest and very wide earliest and latest parameters:

index=web status=400 earliest=0 latest=+1d _index_earliest=-15m@m _index_latest=@m

However, it's still possible to miss events that have been given an _indextime value of T but aren't synchronized to disk until after your search executes. You can use real-time searches to see events as they arrive at indexers (or as they're written to storage, depending on the type of real-time search), but for your use case, time windows are still required, and events may still be missed.
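As a rough way to measure how much ingestion lag you're actually dealing with before choosing a backoff, something like this can help (the index and time range are placeholders):

index=web earliest=-24h
| eval lag_seconds=_indextime-_time
| stats min(lag_seconds) avg(lag_seconds) perc95(lag_seconds) max(lag_seconds)

If the 95th percentile lag is, say, 90 seconds, a 2 minute backoff on the scheduled search window is a reasonable starting point.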
Thank you @tscroggins @ITWhisperer. How do we avoid ingestion latency in Splunk? Can we completely avoid these false positives and false negatives?
Hi @Karthikeya, false positive, false negative, etc. have the same definitions in Splunk that they have in statistics. I'm in the United States, and I find the NIST/SEMATECH e-Handbook of Statistical Methods, Chapter 6, "Process or Product Monitoring and Control," a useful day-to-day reference: https://www.itl.nist.gov/div898/handbook/index.htm.

In your example, you're counting events. For example, a basic search scheduled to run every minute:

index=web status=400 earliest=-15m@m latest=@m | stats count | where count>5

gives you the count of status=400 events over the prior 15 minutes. In this context, false positive and false negative could relate to the time the events were generated and the delay between that time and the index time. If a status=400 event occurred at 00:14:59 but was indexed by Splunk at 00:15:04, then a search that executes at 00:15:01 for the interval [00:00:00, 00:15:00) would not count the event because it has not been indexed by Splunk. This is a false negative. You can reduce the probability of false negatives by adding a backoff to your search, 1 minute in this example:

index=web status=400 earliest=-16m@m latest=-1m@m | stats count | where count>5

However, that will not eliminate all false negatives because there is still a non-zero probability that an event will be indexed outside your search time range.

False positives are more typically associated with measuring against a model. Let's say you've modeled your application's behavior and determined that more than 5 status=400 events over a 15 minute interval likely indicates a client-side code deployment issue as opposed to "normal" client behavior. "More than 5" is associated with a control limit, for example a deviation from a mean; however, the number of status=400 events is a random variable. A bad client-side code deployment may trigger 4 status=400 events, which is a false negative, and a good client-side deployment may trigger 6 status=400 events, which is a false positive.

Several Splunk value-added products like Splunk Enterprise Security and Splunk IT Service Intelligence provide ready-to-run modeling and monitoring solutions, but in general, you would model your application's behavior using either traditional methods outside Splunk, or statistical functions or an add-on like the Splunk Machine Learning Toolkit inside Splunk. You would then apply your model using custom Splunk searches.
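To make the control-limit idea concrete, here is a rough SPL sketch; the index, the 15 minute span, and the 3-sigma threshold are all assumptions rather than recommendations:

index=web status=400 earliest=-7d@m latest=@m
| bin _time span=15m
| stats count by _time
| eventstats avg(count) as mean stdev(count) as sd
| eval upper_limit=mean+(3*sd)
| where count>upper_limit

In practice you would compute the baseline from history and compare only the most recent interval against it, but the shape of the search is the same.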
Thank you @tscroggins 
It is not a malfunction of Splunk - false positives and negatives can arise if your monitoring solution is not robust enough for your requirements. For example, in your scenario, if you are monitoring every 15 minutes, let's say at 00, 15, 30 and 45 minutes past the hour, but you get 400 errors at minutes 12, 13, 14, 15, 16, and 17, you have 6 errors, but 3 fall into the 00-14 time bucket and 3 fall into the 15-29 time bucket. Would you say this is a missed alert (false negative) or something you would tolerate?

In another scenario, let's say you have errors occurring at minutes 13, 14, 15, 16, 28 and 29, but the 13 and 14 errors arrive late, so they are picked up in the 15-29 time bucket and you raise an alert. This might be seen as a false positive, i.e. an alert that you didn't really want.

It all comes down to what your requirements are and what tolerances you are prepared to accept in your monitoring environment.
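If you want to see which events would land in a different 15-minute bucket by arrival time than by event time, a quick sketch (the index and field names are placeholders):

index=web status=400
| eval indextime=_indextime
| eval event_bucket=strftime(floor(_time/900)*900, "%H:%M")
| eval arrival_bucket=strftime(floor(indextime/900)*900, "%H:%M")
| table _time indextime event_bucket arrival_bucket

Rows where event_bucket and arrival_bucket differ are exactly the late arrivals described above.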
Hi @NavS, refer to https://docs.splunk.com/Documentation/SplunkCloud/latest/Service/SplunkCloudservice for supported data egress methods:

Dynamic Data Self-Storage (export of aged data per index from Splunk Cloud Platform to Amazon S3 or Google Cloud Storage): no limit to the amount of data that can be exported from your indexes to your Amazon S3 or Google Cloud Storage account in the same region. Dynamic Data Self-Storage is designed to export 1 TB of data per hour.

Search results via UI or REST API: recommended at no more than 10% of ingested data. For optimal performance, no single query, or all queries in aggregate over the day from the UI or REST API, should return full results of more than 10% of ingested daily volume. To route data to multiple locations, consider solutions like Ingest Actions, Ingest Processor, or the Edge Processor solution.

Search results to Splunk User Behavior Analytics (UBA): no limit on data returned by search queries to feed into Splunk UBA.

To stream events to both Splunk Cloud and another destination, an intermediate forwarding solution is required. You should contact your client's Splunk account team for confirmation, but your Splunk Cloud native options are likely limited to the list above.
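As one concrete example of the "search results via REST API" option, a hedged sketch of exporting events from a Splunk Cloud search head with curl; the hostname, token, and search string are placeholders, and REST access over port 8089 must be enabled and allow-listed for your stack:

curl -s https://yourstack.splunkcloud.com:8089/services/search/jobs/export \
  -H "Authorization: Bearer $SPLUNK_TOKEN" \
  --data-urlencode search="search index=web earliest=-15m" \
  --data-urlencode output_mode=json

This streams results as they are produced, but it is subject to the roughly 10% of daily ingest guidance above, so it is better suited to filtered subsets than to full-volume replication.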
Hi Splunk Community, I need advice on the best approach for streaming logs from Splunk Cloud Platform to an external platform. The logs are already being ingested into Splunk Cloud from various applications used by my client's organization. Now the requirement is to forward or stream these logs to an external system for additional processing and analytics. #SplunkCloud Thank you, Nav
Thank you @ITWhisperer. So, on a daily basis in a Splunk environment, which of the above 4 cases will be the most frequent scenario, and how do we avoid it? You are saying false alerts will be triggered even though the condition is not met... how is that possible? What is the mechanism behind false positives? Example: when status=400 reaches a count of more than 5 in the last 15 minutes, an alert should be triggered. We will set the alert correctly, so why would the alert still be triggered? Is it a malfunction of Splunk? I didn't get you. Do false positives generally happen? Can you please give more detail on this? Thanks once again.
A false positive is something that is reported as being true when it is actually false. A false negative is something that is reported as being false when it is actually true. In monitoring terms, this could relate to, for example, an alarm being raised when the condition / threshold has not been reached (false positive) or an alarm not being raised when the condition / threshold has been reached (false negative). Both of these situations should be avoided whenever possible, although in some environments this is not always possible. If this perfect monitoring scenario cannot be reached, you have to decide at what point the number of false alarms is tolerable for your organisation.
Hello, let me explain my architecture. Multi-site cluster (3-site cluster): 2 indexers, 1 SH, and 2 syslog servers (UF installed); in each site 1 deployment server; 1 deployer overall; and 2 cluster managers (1 standby). As of now, network logs are sent to our syslog servers and the UFs forward the data to the indexers. We will route the logs with the help of the FQDN. For example, we have an application X whose events may or may not contain an FQDN. If an event contains an FQDN, it will go to that app's index; otherwise it will go to a different index (I wrote these props and transforms on the cluster manager). In the deployment server's inputs.conf we just give the log path along with a default index (which is overridden by the transforms specified on the cluster manager). So all the logs flow to the indexers, and the props and transforms we deployed via the cluster manager filter and route the data. Is there any other way to write these configurations other than this? Here are the props and transforms from the cluster manager:

cat props.conf

[f5_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
SEDCMD-newline_remove = s/\\r\\n/\n/g
LINE_BREAKER = ([\r\n]+)[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s
SHOULD_LINEMERGE = False
TRUNCATE = 10000
# Leaving PUNCT enabled can impact indexing performance. Customers can
# comment this line if they need to use PUNCT (e.g. security use cases)
ANNOTATE_PUNCT = false
TRANSFORMS-0_fix_hostname = syslog-host
TRANSFORMS-1_extract_fqdn = f5_waf-extract_fqdn
TRANSFORMS-2_fix_index = f5_waf-route_to_index

cat transforms.conf

# FIELD EXTRACTION USING A REGEX
[f5_waf-extract_fqdn]
SOURCE_KEY = _raw
REGEX = Host:\s(.+)\n
FORMAT = fqdn::$1
WRITE_META = true

# Routes the data to a different index -- this must be listed in a TRANSFORMS-<name> entry.
[f5_waf-route_to_index]
INGEST_EVAL = indexname=json_extract(lookup("fqdn_indexname_mapping.csv", json_object("fqdn", fqdn), json_array("indexname")), "indexname"), index=if(isnotnull(indexname), indexname, index), fqdn:=null(), indexname:=null()

cat fqdn_indexname_mapping.csv

fqdn,indexname
selenium.systems.us.fed,xxx_app_selenium1
v-testlab-service1.systems.us.fed,xxx_app_testlab_service1

I have gone through the documents; I'm just asking whether there are any better alternatives.
What exactly do false positives, false negatives, true positives, and true negatives mean? How do we identify them in Splunk, can we trigger on them, and how are they useful to us in monitoring with Splunk? Please explain.
Hi @jaibalaraman, yes, this should be fine. See the compatibility matrix: https://docs.splunk.com/Documentation/Splunk/9.3.2/Installation/Systemrequirements#Supported_Operating_Systems