All Topics


Given an event log specification of:

{DateTime} Times: Online_1: CNCT_TM: {CNCT_TM}; LOG_TM: {LOG_TM} Online_2: CNCT_TM: {CNCT_TM}; LOG_TM: {LOG_TM} Offline_1: CNCT_TM: {CNCT_TM}; LOG_TM: {LOG_TM} Offline_2: CNCT_TM: {CNCT_TM}; LOG_TM: {LOG_TM}

which is logged 4 times a day, and an example entry like:

2021-12-08 14:31:59 Times: Online_1: CNCT_TM: 2021-12-08 14:47:13.873; LOG_TM: 2021-12-08 14:47:16.387; Online_2: CNCT_TM: 2021-12-08 14:47:49.837; LOG_TM: 2021-12-08 14:47:50.480; Offline_1: CNCT_TM: 2021-12-08 14:48:27.303; LOG_TM: 2021-12-08 14:48:28.927; Offline_2: CNCT_TM: 2021-12-08 14:48:56.673; LOG_TM: 2021-12-08 14:48:58.750

how do I evaluate and graph the time range, in minutes and seconds (just seconds would be fine for me), between the maximum and minimum of the 8 times embedded in the log entry? Ultimately, I would like to create an alert if a time range greater than something like 30 minutes occurs.
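A minimal sketch of one way to compute this, assuming the eight timestamps can be pulled out of _raw with a regex (the index and sourcetype names here are placeholders):

index=your_index sourcetype=your_sourcetype "Times:"
| rex max_match=8 "(?:CNCT_TM|LOG_TM): (?<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})"
| mvexpand ts
| eval ts_epoch=strptime(ts, "%Y-%m-%d %H:%M:%S.%3N")
| stats range(ts_epoch) AS range_sec by _time
| eval range_min=round(range_sec/60, 2)

A timechart or line chart of range_sec then graphs the spread per entry, and the alert condition could simply be | where range_sec > 1800 for the 30-minute threshold.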
Hello, I am wondering about the best way to find out whether a value in one of my fields matches a value in a multivalue field. I cannot use mvexpand and a where clause because of the limit I run into. Is there a way to show that "match" matches one of the values in field2, but "miss" does not?
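If the goal is a per-event "does field1 appear anywhere in field2" flag, here is a sketch that avoids mvexpand entirely; it assumes the fields really are named field1 and field2 and that field2 does not already contain duplicate values:

| eval combined=mvappend(field2, field1)
| eval verdict=if(mvcount(mvdedup(combined)) < mvcount(combined), "match", "miss")

If appending field1 to field2 and then deduplicating shrinks the list, field1 must already be one of field2's values.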
Hey everyone! I have what I would consider a complex problem, and I was hoping to get some guidance on the best way to handle it.

We are attempting to log events from an OpenShift (Kubernetes) environment. So far, I've successfully gotten raw logs coming in from Splunk Connect for Kubernetes, into our heavy forwarder via HEC, and then into our indexer. The data being ingested has a bunch of metadata about the pod name, container name, etc. from this step. However, the problem is what to do with it from there.

In this specific configuration, the individual component logs are getting combined into a single stream, with a few fields of metadata attached to the beginning. After this metadata, the event 100% matches up with what I'd consider a "standard" event, or something Splunk is more used to processing. For example:

tomcat-access;console;test;nothing;127.0.0.1 - - [08/Dec/2021:13:25:21 -0600] "GET /idp/status HTTP/1.1" 200 3984

This is semicolon-delimited first, and then space-delimited, as follows:
- "tomcat-access" is the name of the container component that generated the file.
- "console" indicates the source (console or file name).
- "test" indicates the environment.
- "nothing" indicates the user token.
- Everything after this semicolon is the real log. In this example, it is a Tomcat access sourcetype.

Compare this to another line in the same log:

shib-idp;idp-process.log;test;nothing;2021-12-08 13:11:21,335 - 10.103.10.30 - INFO [Shibboleth-Audit.SSO:283] - 10.103.10.30|2021-12-08T19:10:57.659584Z|2021-12-08T19:11:21.335145Z|sttreic

- shib-idp is the name of the container component that generated the log.
- idp-process.log is the source file in that component.
- test is the environment.
- nothing is the user token.
- Everything after that last semicolon is the Shibboleth process log. Notably, this part uses pipes as delimiters.

The SCK components, as I have them configured now, ship all these sources to "ocp:container:shibboleth" (or something like that). When they are shipped over, metadata is added for the container_name, pod_name, and other CRI-based log data.

What I am aiming to do: I would like to use the semicolon-delimited parts of the event to tell the heavy forwarder which sourcetypes to work with. Ideally, I would like to cut down on having to make my own sourcetypes and regex, but I can do that if I must. So for the tomcat-access example above, I'd want:
- All the SCK / OpenShift related fields to stick with the event.
- The event to be chopped up into 5 segments.
- The event type to be recognized by the first 2 fields (there is some duplication in the first field, so the second field would be the most important).
- The first 4 segments to be appended as field information (like "identifier" or "internal_source").
- The 5th segment to be exported to another sourcetype for further processing (in this case, "tomcat_localhost_access" from "Splunk_TA_tomcat"). All the other fields would stick with the event as Splunk_TA_tomcat did its field extractions.

If this isn't possible, I could make a unique sourcetype transform for each event type - the source program has 8 potential sources. But that would involve quite a bit of duplication. Even as I type this out, I'm getting the sinking feeling that I'll need to just bite the bullet and make 8 different transforms. But one can hope, right?

Any help would be appreciated. I've gotten through Sysadmin and data admin training, but nothing more advanced than that.
I suspect I'll need to use this pattern in the future for other OpenShift logs of ours, but I don't know at this stage.
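For what it's worth, here is a rough props/transforms sketch of the general pattern on the heavy forwarder; the stanza names and the derived sourcetype prefix are only illustrative, and mapping the second segment onto an existing TA sourcetype such as tomcat_localhost_access would still need one transform per target (transforms cannot do lookups at index time):

# props.conf - applied to the sourcetype SCK ships the container logs as
[ocp:container:shibboleth]
TRANSFORMS-ocp_route = ocp_set_sourcetype, ocp_strip_prefix

# transforms.conf
# take the second semicolon-delimited segment and use it to build the sourcetype
[ocp_set_sourcetype]
REGEX = ^[^;]+;([^;]+);
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::ocp:$1

# drop the four metadata segments so the remaining event looks "standard"
[ocp_strip_prefix]
REGEX = ^(?:[^;]*;){4}(.*)$
DEST_KEY = _raw
FORMAT = $1

Keeping the first four segments as indexed fields would take an additional transform with WRITE_META = true, applied before the prefix is stripped.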
Is there any way to have the Message area show below the Included results? I include a rather lengthy but important reference in the message, and the long-time recipients have to scroll down through it every time the email is generated in order to see the results. Using the footer area is not an option, as I don't have access to it and also don't want the reference showing up on any other email alert. Cheers!
We are not getting data in the itsi_tracked_alerts and itsi_grouped_alerts indexes, and we are also not getting data in the itsi_summary index.
Hello, I have a question about RUM, the new MINT replacement, and hopefully this is the correct board. Our project is separated into two parts, server side and client side. The latter is an Android application. In the past, we successfully implemented MINT in the app; the main uses are receiving crash alerts, monitoring handled exceptions, and sending logs.

Our server side uses Splunk Enterprise for hosting its logs. In the past, we successfully managed to connect MINT to Splunk Enterprise via a data collector and centralized all logs; a forwarder may have been involved. The setup was done by a colleague who is no longer with the company. Centralizing helps our cross-functional teams search everything in one place.

Do you know if a similar thing can be done with RUM? Can all the data collected by RUM be sent to Enterprise? I'm not very familiar with Splunk capabilities, and any help will be appreciated! Thank you!
What is the best way to collect logs from Cloudflare? Since I'm not an AWS customer, I understand the app https://splunkbase.splunk.com/app/5114/ is not an option. Am I right? Thanks in advance for your help.
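Outside AWS, one common pattern is pointing Cloudflare Logpush (or any HTTPS log export) at Splunk's HTTP Event Collector. A quick sanity check that the HEC side is reachable might look like this; the host, port and token are placeholders:

curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": {"msg": "cloudflare connectivity test"}, "sourcetype": "cloudflare:json"}'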
I have an alert that logs an event and sends an email. I am trying to add the timestamp of the event to the Log Event action, but it is not being added to the log event. The timestamp is correct in the alert's search table and is also added to the email message correctly. However, it does not show up in the Log Event.

| eval event_timestamp=strftime(_time,"%Y-%m-%dT%H:%M:%S")
| table event_timestamp

Log Event - [Event input]:

... event_timestamp=$result.event_timestamp$ ...

Send Email action - [Message input]:

... Event Timestamp: $result.event_timestamp$ Priority: XYZ ...

I have also noticed that if I put the timestamp before other fields in the Log Event action, then those fields are also missing from the log. Any ideas why the Log Event action isn't working when I add the timestamp to it?
Hi, I have a report that pulls daily transaction counts from a summary index. Running the report for "month to date", I don't get results from every day. My search is this:

index=summary search_name=Summarization_Daily_Txn App IN ("XXX") endpoint="ZZZZ"
| bin _time span=1d
| stats sum(Count) AS Txn_Count by _time
| addcoltotals

In the output, the Dec 2nd, 4th and 5th totals are missing. Yes, I have verified that there are counts for those days. If I run the report so that it spans just Dec 4th and 5th, the counts show up, just not if I run it using earliest=@mon latest=@d. Any ideas on what I am doing wrong?
Hi, I need to carry out a search that extracts all users who use an iPhone with so=9.*, and then, from those extracted users, find which of them have also used another device. One solution would be to run the first search, get the list of all users, and then do a new search with the UserIds as input.

First search:
search model=iphone so=9.* | table UserId

Second search:
search UserId IN (user list from the first search) model!=iphone

Would it be possible to do this extraction with just one search? Thanks
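A sketch of the single-search version using a subsearch, with a placeholder index name (and keeping in mind the default subsearch result limits):

index=your_index model!=iphone
    [ search index=your_index model=iphone so=9.* | stats count by UserId | fields UserId ]
| stats values(model) AS other_models by UserId

The subsearch returns the list of UserIds that match the iPhone/9.* condition, and the outer search then keeps only the non-iPhone events for those users.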
Hi,  After upgrading to Splunk version 8.2.3 a few weeks ago, it was suddenly possible to remove/add a graph in a plot by clicking on the legend. Now, this feature is not working. Has this feature been removed? I cannot find any documentation on this feature at all.  Appreciate any help!
Hello, I am trying to create a saved search that runs every day and exports the results to a CSV. What is a one-line curl command that would allow me to do this?
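One hedged sketch, assuming the search is already saved as a report named My_Daily_Report and that the host and credentials are placeholders, is to call the export endpoint and run the report with the savedsearch command:

curl -k -u admin:changeme https://localhost:8089/services/search/jobs/export --data-urlencode search='| savedsearch "My_Daily_Report"' -d output_mode=csv > results.csv

Cron (or the saved search's own schedule plus an outputcsv step) would handle the "runs every day" part.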
I have set up a new Splunk test environment with a search head cluster (3 SH) and an indexer cluster (2 IDX). I also added Splunk_SA_CIM, first in version 4.18 and, in my latest test, version 4.20.2. Splunk is working fine and the accelerated data models are working, which means they are searchable. After installing the Sophos Central app https://splunkbase.splunk.com/app/6186/ I'm no longer able to search in my data model:

| datamodel Authentication search

Simpler still, searching with a tag is not working; index=* tag=authentication gives the same error. I tested this on a single Splunk instance without problems. Any ideas?
After installing the Microsoft Windows add-on, I cannot see the applicable tags for the Network Resolution data model with respect to DNS logs. Why can't I see any tags? Any thoughts?
Hi. I have a developer who works on a local app and maintains the source in Azure DevOps. He is, of course, interested in automating the complete deployment process to our on-prem search head. Has anyone tried this setup and found some good practices? For some reason I'm not comfortable with having a DevOps agent running on the search head, where it could potentially have access to the filesystem etc. Any advice is greatly appreciated. Kind regards
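One way to avoid running an agent on the search head itself is to have the pipeline push the package over SSH and use the Splunk CLI to install it. A rough sketch, with all paths, hostnames and credentials as placeholders:

# package the app from the repo checkout
tar -czf my_app.tgz my_app/

# copy it to the search head and install/upgrade it
scp my_app.tgz deploy@searchhead:/tmp/
ssh deploy@searchhead "/opt/splunk/bin/splunk install app /tmp/my_app.tgz -update 1 -auth admin:changeme"

Depending on what the app contains, a restart or a debug/refresh may still be needed afterwards.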
"The search you ran returned a number of fields that exceeded the current indexed field extraction limit='200'. To ensure that all fields are extracted for search, set limits.conf: [kv] / indexed_kv_limit to a number that is higher than the number of fields contained in the files that you index."

Hi, I am getting the above error, while on the left side I only have around 35 fields extracted at search time. The log is ingested with Splunk HEC using Splunk_TA_nix with the linux_secure stanza. How can I find out what is causing the above error? I didn't find anything that would create indexed fields, and I don't see any such fields in the sidebar on the left. How do I troubleshoot this? With a search like this, I got 11 fields:

| walklex index="<index_name>" type=field | search NOT field=" *" | stats list(distinct_values) by field
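For reference, the change the message itself suggests would go in limits.conf on the search head and look something like this (the value is only an example):

[kv]
indexed_kv_limit = 500

That said, it is probably worth confirming where the indexed fields come from before raising the limit; with HEC, for instance, anything sent in the "fields" element of the event payload becomes an indexed field.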
I'm new to Splunk. I created 15 users and had failed login attempts on some of them. How can I find the first 10 failed login attempts, and with which command can I see this in Splunk? I tried:

sourcetype="WinEventLog:Security" eventcode 4625 | top limit=10 "Account Name"

It brought back all users, but how do I integrate the "failed" part into it? Am I walking down the wrong path?
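A sketch building on that attempt: EventCode 4625 is the Windows failed-logon event, so filtering on it (with the = sign the attempt above is missing) is what restricts the search to failures:

sourcetype="WinEventLog:Security" EventCode=4625 | top limit=10 "Account Name"

top gives the 10 accounts with the most failures; if "first 10" means the 10 earliest failed attempts, something like | sort + _time | head 10 | table _time, "Account Name" would do that instead.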
Hi everyone, I am having some difficulty counting my consumedHostUnits. I have this command:

index="dynatrace_hp" | search endpoint="infrastructure/hosts" | stats distinct_count(discoveredName) count(consumedHostUnits) by "managementZones{}.name" | search "managementZones{}.name"="[Env]*"

But the results don't return the right information (I would like to have the total consumedHostUnits for all the hosts in a management zone). Thanks for your help!
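A sketch that sums the host units instead of counting the events, assuming each host can report more than once in the time range (so the latest value per host is taken first):

index="dynatrace_hp" endpoint="infrastructure/hosts" "managementZones{}.name"="[Env]*"
| stats latest(consumedHostUnits) AS host_units by discoveredName, "managementZones{}.name"
| stats dc(discoveredName) AS hosts, sum(host_units) AS total_host_units by "managementZones{}.name"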
I have data in a source which shows Y/N for the fields investor, borrower, guarantor, and benefic for each customer. I need to show a pie chart with the count of customers where investor, borrower, guarantor, or benefic is Y, based on the customer ID. For example, if a customer has only investor = Y, then the investor count for that customer will be 1; if a customer has all the fields = Y, then each of those counts will be 1 for that customer. Below is the data snippet:

index="customerdata" | table CUS_CID_CUST_ID, CUS_IND_INVESTOR, CUS_IND_BORROWER, CUS_IND_GUARANTOR, CUS_IND_BENEFIC

The user wants to see one slice per role with the corresponding customer count.
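A sketch of one way to feed that pie chart, assuming the field names from the table command above and that a customer may appear in more than one event (so distinct customer IDs are counted per role):

index="customerdata"
| eval inv=if(CUS_IND_INVESTOR="Y", CUS_CID_CUST_ID, null()),
       bor=if(CUS_IND_BORROWER="Y", CUS_CID_CUST_ID, null()),
       gua=if(CUS_IND_GUARANTOR="Y", CUS_CID_CUST_ID, null()),
       ben=if(CUS_IND_BENEFIC="Y", CUS_CID_CUST_ID, null())
| stats dc(inv) AS Investor, dc(bor) AS Borrower, dc(gua) AS Guarantor, dc(ben) AS Benefic
| transpose column_name=Role
| rename "row 1" AS Customers

After the transpose there is one row per role with its customer count, which is the shape the pie chart visualization expects.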
Hi there. I was wondering... All the docs and how-tos regarding index-time extractions say that you need to set the field to indexed in fields.conf. Fair enough - you want your field indexed, you make it an indexed field. That's understandable. But what really happens if I do an index-time extraction (and/or an ingest-time eval) producing a new field but I don't set it as an indexed one? Does the indexer/HF simply parse the field and use it internally in the parsing queue but not pass it downstream into indexing? Or is it passed down to indexing but ignored there? Or is it sent to indexing and indexed, but if the search head(s) don't know it's indexed, it's not used in search? Or any other option?
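For context, a hedged sketch of where each piece normally lives (the names are illustrative):

# transforms.conf on the HF/indexer - WRITE_META is what makes the extraction an index-time field
[add_my_user]
REGEX = user=(\w+)
FORMAT = my_user::$1
WRITE_META = true

# props.conf on the HF/indexer
[my_sourcetype]
TRANSFORMS-adduser = add_my_user

# fields.conf on the search head
[my_user]
INDEXED = true

My understanding is that the field is written into the index either way; fields.conf mainly tells the search head that my_user=foo can be answered from the indexed tokens rather than being treated as a search-time extraction, so without it you would typically have to search my_user::foo to hit the indexed copy.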